source: stringclasses (469 values)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
0_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
0_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
[[open-in-colab]] Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:
0_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* Text, use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors. * Speech and audio, use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors. * Image inputs, use an [ImageProcessor](./main_classes/image_processor) to convert images into tensors.
0_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* Image inputs, use an [ImageProcessor](./main_classes/image_processor) to convert images into tensors. * Multimodal inputs, use a [Processor](./main_classes/processors) to combine a tokenizer and a feature extractor or image processor. <Tip> `AutoProcessor` **always** works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor or processor. </Tip>
0_1_2
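The snippet below is a minimal sketch of that Tip: `AutoProcessor` resolves to whichever preprocessing class a checkpoint ships with. Both checkpoints appear later in this guide; the comments describe the expected fallback behavior rather than guaranteed class names.

```py
>>> from transformers import AutoProcessor

>>> # Text-only checkpoint: AutoProcessor resolves to the model's tokenizer
>>> processor = AutoProcessor.from_pretrained("google-bert/bert-base-cased")

>>> # Speech-to-text checkpoint with both audio and text preprocessing:
>>> # AutoProcessor resolves to the model's processor (feature extractor + tokenizer)
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```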
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
</Tip> Before you begin, install 🤗 Datasets so you can load some datasets to experiment with: ```bash pip install datasets ```
0_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
<Youtube id="Yffk5aydLzg"/> The main tool for preprocessing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer. <Tip>
0_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
<Tip> If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same token-to-index mapping (usually referred to as the *vocab*) as during pretraining. </Tip> Get started by loading a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method. This downloads the *vocab* a model was pretrained with: ```py >>> from transformers import AutoTokenizer
0_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased") ``` Then pass your text to the tokenizer: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
0_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` The tokenizer returns a dictionary with three important items: * [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence. * [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not.
0_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not. * [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence. Return your input by decoding the `input_ids`: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ```
0_2_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` As you can see, the tokenizer added two special tokens - `CLS` and `SEP` (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you. If there are several sentences you want to preprocess, pass them as a list to the tokenizer: ```py >>> batch_sentences = [ ... "But what about second breakfast?",
0_2_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
0_2_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
[101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ```
0_2_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
Sentences aren't always the same length, which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences. Set the `padding` parameter to `True` to pad the shorter sequences in the batch to match the longest sequence: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.",
0_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
0_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` The first and third sentences are now padded with `0`'s because they are shorter.
0_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length. Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ]
0_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True) >>> print(encoded_input) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
0_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` <Tip> Check out the [Padding and truncation](./pad_truncation) concept guide to learn more about the different padding and truncation arguments. </Tip>
0_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
Finally, you want the tokenizer to return the actual tensors that get fed to the model. Set the `return_tensors` parameter to either `pt` for PyTorch, or `tf` for TensorFlow: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input)
0_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
0_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.",
0_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf") >>> print(encoded_input) {'input_ids': <tf.Tensor: shape=(2, 9), dtype=int32, numpy= array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
0_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
0_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} ``` </tf> </frameworkcontent> <Tip> Different pipelines support tokenizer arguments in their `__call__()` differently. `text-2-text-generation` pipelines support (i.e. pass on)
0_5_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
only `truncation`. `text-generation` pipelines support `max_length`, `truncation`, `padding` and `add_special_tokens`. In `fill-mask` pipelines, tokenizer arguments can be passed in the `tokenizer_kwargs` argument (dictionary). </Tip>
0_5_6
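As a rough sketch of how those arguments can be passed (the checkpoint names here are illustrative choices, not ones prescribed by this guide):

```py
>>> from transformers import pipeline

>>> # `fill-mask`: tokenizer arguments go through the `tokenizer_kwargs` dictionary
>>> fill_mask = pipeline("fill-mask", model="google-bert/bert-base-cased")
>>> fill_mask(
...     "Do not meddle in the affairs of [MASK].",
...     tokenizer_kwargs={"truncation": True, "max_length": 32},
... )

>>> # `text-generation`: the supported tokenizer arguments are accepted directly in the call
>>> generator = pipeline("text-generation", model="openai-community/gpt2")
>>> generator("But what about second breakfast?", truncation=True, add_special_tokens=False)
```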
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors. Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets: ```py
0_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> from datasets import load_dataset, Audio
0_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file: ```py >>> dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32),
0_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 8000} ``` This returns three items: * `array` is the speech signal loaded - and potentially resampled - as a 1D array. * `path` points to the location of the audio file.
0_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* `path` points to the location of the audio file. * `sampling_rate` refers to how many data points in the speech signal are measured per second.
0_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* `sampling_rate` refers to how many data points in the speech signal are measured per second. For this tutorial, you'll use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. Take a look at the model card, and you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important that your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data.
0_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
1. Use 🤗 Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000)) ``` 2. Call the `audio` column again to resample the audio file: ```py >>> dataset[0]["audio"] {'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ..., 3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
0_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'sampling_rate': 16000} ``` Next, load a feature extractor to normalize and pad the input. When padding textual data, a `0` is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a `0` - interpreted as silence - to `array`.
0_6_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]: ```py >>> from transformers import AutoFeatureExtractor
0_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") ``` Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur. ```py >>> audio_input = [dataset[0]["audio"]["array"]] >>> feature_extractor(audio_input, sampling_rate=16000) {'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
0_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ..., 5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]} ``` Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples: ```py >>> dataset[0]["audio"]["array"].shape (173398,)
0_6_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> dataset[1]["audio"]["array"].shape (106496,) ``` Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it: ```py >>> def preprocess_function(examples): ... audio_arrays = [x["array"] for x in examples["audio"]] ... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True,
0_6_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... inputs = feature_extractor( ... audio_arrays, ... sampling_rate=16000, ... padding=True, ... max_length=100000, ... truncation=True, ... ) ... return inputs ``` Apply the `preprocess_function` to the first few examples in the dataset: ```py >>> processed_dataset = preprocess_function(dataset[:5]) ``` The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now! ```py
0_6_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> processed_dataset["input_values"][0].shape (100000,)
0_6_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> processed_dataset["input_values"][1].shape (100000,) ```
0_6_14
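To preprocess every example rather than only the first few, the same function can be applied across the whole dataset. This is a sketch using 🤗 Datasets `map` with `batched=True` (not shown in the guide itself):

```py
>>> # Apply `preprocess_function` to batches of examples across the entire dataset
>>> processed_dataset = dataset.map(preprocess_function, batched=True)
```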
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
For computer vision tasks, you'll need an [image processor](main_classes/image_processor) to prepare your dataset for the model. Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors. <Tip> Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation
0_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
<Tip> Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes: * Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
0_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
* Image preprocessing guarantees that the images match the model’s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained. You can use any library you like for image augmentation. For image preprocessing, use the `ImageProcessor` associated with the model. </Tip>
0_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
</Tip> Load the [food101](https://huggingface.co/datasets/food101) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets: <Tip> Use 🤗 Datasets `split` parameter to only load a small sample from the training split since the dataset is quite large! </Tip> ```py >>> from datasets import load_dataset
0_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> dataset = load_dataset("food101", split="train[:100]") ``` Next, take a look at the image with 🤗 Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature: ```py >>> dataset[0]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/> </div> Load the image processor with [`AutoImageProcessor.from_pretrained`]:
0_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
</div> Load the image processor with [`AutoImageProcessor.from_pretrained`]: ```py >>> from transformers import AutoImageProcessor
0_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ```
0_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module. If you're interested in using another data augmentation library, learn how in the [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) or [Kornia
0_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
or [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb).
0_7_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
1. Here we use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain together a couple of transforms - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). Note that for resizing, we can get the image size requirements from the `image_processor`. For some models, an exact height and
0_7_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
Note that for resizing, we can get the image size requirements from the `image_processor`. For some models, an exact height and width are expected; for others, only the `shortest_edge` is defined. ```py >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
0_7_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... )
0_7_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)]) ``` 2. The model accepts [`pixel_values`](model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input. `ImageProcessor` can take care of normalizing the images, and generating appropriate tensors. Create a function that combines image augmentation and image preprocessing for a batch of images and generates `pixel_values`: ```py >>> def transforms(examples):
0_7_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> def transforms(examples): ... images = [_transforms(img.convert("RGB")) for img in examples["image"]] ... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"] ... return examples ``` <Tip> In the example above we set `do_resize=False` because we have already resized the images in the image augmentation transformation,
0_7_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
and leveraged the `size` attribute from the appropriate `image_processor`. If you do not resize images during image augmentation, leave this parameter out. By default, `ImageProcessor` will handle the resizing. If you wish to normalize images as a part of the augmentation transformation, use the `image_processor.image_mean` and `image_processor.image_std` values. </Tip> 3. Then use 🤗 Datasets [`~datasets.Dataset.set_transform`] to apply the transforms on the fly: ```py
0_7_14
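If you do normalize as part of the augmentation transformation, a minimal sketch (reusing the `size` and `image_processor` defined above) could look like the following. Turning off `do_rescale` and `do_normalize` in the image processor afterwards is an assumption made here to avoid double processing, mirroring the `do_resize=False` pattern above.

```py
>>> from torchvision.transforms import ColorJitter, Compose, Normalize, RandomResizedCrop, ToTensor

>>> # Normalize with the same statistics the image processor would use
>>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
>>> _transforms = Compose(
...     [RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5), ToTensor(), normalize]
... )

>>> def transforms(examples):
...     images = [_transforms(img.convert("RGB")) for img in examples["image"]]
...     # Resizing, rescaling to [0, 1], and normalization already happened above,
...     # so turn them off in the image processor (assumption: these flags are supported here)
...     examples["pixel_values"] = image_processor(
...         images, do_resize=False, do_rescale=False, do_normalize=False, return_tensors="pt"
...     )["pixel_values"]
...     return examples
```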
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
</Tip> 3. Then use 🤗 Datasets [`~datasets.Dataset.set_transform`] to apply the transforms on the fly: ```py >>> dataset.set_transform(transforms) ``` 4. Now when you access the image, you'll notice the image processor has added `pixel_values`. You can pass your processed dataset to the model now! ```py >>> dataset[0].keys() ``` Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different. ```py
0_7_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> import numpy as np >>> import matplotlib.pyplot as plt
0_7_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> img = dataset[0]["pixel_values"] >>> plt.imshow(img.permute(1, 2, 0)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/> </div> <Tip> For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, `ImageProcessor` offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes,
0_7_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes or segmentation maps. </Tip>
0_7_18
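As one concrete illustration of these post-processing methods, here is a sketch of post-processing for object detection. The DETR checkpoint and the image path are illustrative assumptions rather than part of the guide:

```py
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection

>>> image = Image.open("path/to/image.jpg")  # any RGB image (placeholder path)
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Convert raw logits and boxes into labeled bounding boxes in the original image's coordinates
>>> target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)
>>> results[0]["boxes"], results[0]["labels"], results[0]["scores"]  # one dict of predictions per image
```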
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad`] from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images together. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt")
0_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... return batch ```
0_8_1
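A short usage sketch: the custom `collate_fn` plugs directly into a PyTorch `DataLoader`. The batch size is arbitrary, and the dataset is assumed to yield `pixel_values` and `labels` per example as in the preprocessing step above.

```py
>>> from torch.utils.data import DataLoader

>>> dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
>>> batch = next(iter(dataloader))
>>> batch["pixel_values"].shape  # padded to the largest image in the batch, e.g. torch.Size([4, 3, H, W])
```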
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects, such as a tokenizer and a feature extractor. Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR): ```py
0_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
```py >>> from datasets import load_dataset
0_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> lj_speech = load_dataset("lj_speech", split="train") ``` For ASR, you're mainly focused on `audio` and `text` so you can remove the other columns: ```py >>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"]) ``` Now take a look at the `audio` and `text` columns: ```py >>> lj_speech[0]["audio"] {'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ..., 7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
0_9_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav', 'sampling_rate': 22050}
0_9_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> lj_speech[0]["text"] 'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition' ``` Remember you should always [resample](preprocessing#audio) your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model! ```py >>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000)) ``` Load a processor with [`AutoProcessor.from_pretrained`]: ```py
0_9_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
``` Load a processor with [`AutoProcessor.from_pretrained`]: ```py >>> from transformers import AutoProcessor
0_9_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") ``` 1. Create a function to process the audio data contained in `array` to `input_values`, and tokenize `text` to `labels`. These are the inputs to the model: ```py >>> def prepare_dataset(example): ... audio = example["audio"] ... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
0_9_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/preprocessing.md
.md
... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000)) ... return example ``` 2. Apply the `prepare_dataset` function to a sample: ```py >>> prepare_dataset(lj_speech[0]) ``` The processor has now added `input_values` and `labels`, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now!
0_9_7
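To process the entire dataset instead of a single sample, `prepare_dataset` can be applied with 🤗 Datasets `map`; dropping the raw `audio` and `text` columns afterwards is an illustrative choice rather than something the guide requires:

```py
>>> lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])
```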
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/sagemaker.md
.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
1_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/sagemaker.md
.md
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
1_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/sagemaker.md
.md
The documentation has been moved to [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker). This page will be removed in `transformers` 5.0.
1_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/sagemaker.md
.md
- [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train) - [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference)
1_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
2_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
2_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources.
2_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models): - Programmatically push your files to the Hub. - Drag-and-drop your files to the Hub with the web interface. <iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <Tip>
2_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
picture-in-picture" allowfullscreen></iframe> <Tip> To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one. </Tip>
2_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences. The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag or branch.
2_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
As a result, you can load a specific model version with the `revision` parameter: ```py >>> model = AutoModel.from_pretrained( ... "julien-c/EsperBERTo-small", revision="4c77982" # tag name, or branch name, or commit hash ... ) ``` Files are also easily edited in a repository, and you can view the commit history as well as the differences: ![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png)
2_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default): ```bash huggingface-cli login ```
2_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
```bash huggingface-cli login ``` If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub. ```bash pip install huggingface_hub ``` Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with: ```py
2_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
```py >>> from huggingface_hub import notebook_login
2_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
>>> notebook_login() ```
2_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.
2_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework. <frameworkcontent> <pt> Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
2_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` Then you can save your new TensorFlow model with its new checkpoint: ```py
2_4_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
``` Then you can save your new TensorFlow model with its new checkpoint: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent>
2_4_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
<frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]: ```py
2_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` Pass your training arguments as usual to [`Trainer`]: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ```
2_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:
2_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
``` </pt> <tf> Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add: - An output directory for your model. - A tokenizer. - The `hub_model_id`, which is your Hub username and model name. ```py >>> from transformers import PushToHubCallback
2_5_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
>>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent>
2_5_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
You can also call `push_to_hub` directly on your model to upload it to the Hub. Specify your model name in `push_to_hub`: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function: ```py >>> from transformers import AutoModel
2_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
>>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository: ```py >>> tokenizer.push_to_hub("my-awesome-model") ```
2_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
```py >>> tokenizer.push_to_hub("my-awesome-model") ``` Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.
2_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).
2_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_sharing.md
.md
Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) From here, add some information about your model: - Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
2_7_0