Commit 3478fb8 (verified) by pmolchanov · Parent: d6321f6

Update README.md

Files changed (1): README.md (+150 −143)
---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
---

# Model Overview

Minitron-4B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune the model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
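
The distillation step can be pictured as minimizing the divergence between the pruned student's next-token distribution and that of the original teacher. The snippet below is only a minimal, generic sketch of logit distillation under assumed tensor shapes; it is not the exact loss or recipe used to train Minitron-4B-Base.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # KL divergence between the teacher's and the student's token
    # distributions, averaged over the batch dimension.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean') * temperature ** 2

# Toy tensors standing in for per-token logits: (batch, sequence, vocab).
student_logits = torch.randn(2, 16, 256)
teacher_logits = torch.randn(2, 16, 256)
print(distillation_loss(student_logits, teacher_logits))
```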

Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.

This model is for research and development only.

**Model Developer:** NVIDIA

**Model Dates:** Minitron-4B-Base was trained between February 2024 and June 2024.

## License

Minitron-4B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Model Architecture

Minitron-4B-Base uses a model embedding size of 3072, 32 attention heads, and an MLP intermediate dimension of 9216.
It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
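
These dimensions can be read back from the released checkpoint's configuration. The check below assumes the usual `transformers` attribute names (`hidden_size`, `num_attention_heads`, `intermediate_size`); consult the checkpoint's `config.json` if the field names differ.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) and print the architecture
# hyperparameters described above.
config = AutoConfig.from_pretrained('nvidia/Minitron-4B-Base')
print(config.hidden_size)          # expected: 3072
print(config.num_attention_heads)  # expected: 32
print(config.intermediate_size)    # expected: 9216
```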

**Architecture Type:** Transformer Decoder (auto-regressive language model)

**Network Architecture:** Nemotron-4

**Input Type:** Text

**Input Format:** String

**Input Parameters:** None

**Other Properties Related to Input:** None

**Output Type:** Text

**Output Format:** String

**Output Parameters:** None

**Other Properties Related to Output:** None

## Usage

Support for Nemotron models will be added in the upcoming transformers library release. In the meantime, please install the library from source:

```
pip install git+https://github.com/huggingface/transformers
```

The following code provides an example of how to load the Minitron-4B-Base model and use it to perform text generation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
model_path = 'nvidia/Minitron-4B-Base'
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = 'cuda'
dtype = torch.bfloat16
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Prepare the input text
prompt = 'Complete the paragraph: our solar system is'
inputs = tokenizer.encode(prompt, return_tensors='pt').to(model.device)

# Generate the output
outputs = model.generate(inputs, max_length=20)

# Decode and print the output
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
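
The same generation can also be run through the high-level `pipeline` API. This is just a convenience sketch; note that `max_new_tokens` counts only newly generated tokens, whereas `max_length` above includes the prompt.

```python
import torch
from transformers import pipeline

# Text-generation pipeline wrapping the same checkpoint; device_map='auto'
# lets accelerate place the model on the available GPU(s).
generator = pipeline(
    'text-generation',
    model='nvidia/Minitron-4B-Base',
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
print(generator('Complete the paragraph: our solar system is', max_new_tokens=30)[0]['generated_text'])
```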

## Dataset & Training

**Data Collection Method:** Hybrid

**Labeling Method:** Not Applicable

**Properties:** The training corpus for Minitron-4B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering and alignment-style data to improve model performance.

**Data Freshness:** The pretraining data has a cutoff of June 2023.

## Evaluation Results

*5-shot performance.* Language Understanding evaluated using [Massive Multitask Language Understanding](https://arxiv.org/abs/2009.03300):

| Average |
| :---- |
| 58.6 |

*Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:

| HellaSwag | Winogrande | GSM8K | ARC-C | XLSum |
| :------------- | :------------- | :------------- | :------------- | :------------- |
| 75.0 | 74.0 | 24.1 | 50.9 | 29.5 |
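
For reference, a zero-shot run over the stock harness versions of these tasks might look like the command below. The task names, flags, and dtype setting are assumptions based on current `lm-evaluation-harness` releases, not the authors' exact evaluation setup, and the XLSum addition mentioned above is not covered by it.

```
lm_eval --model hf \
    --model_args pretrained=nvidia/Minitron-4B-Base,dtype=bfloat16 \
    --tasks hellaswag,winogrande,arc_challenge,gsm8k \
    --num_fewshot 0 \
    --batch_size 8
```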

*Code generation performance.* Evaluated using [HumanEval](https://github.com/openai/human-eval):

| pass@1, 0-Shot |
| :------------- |
| 23.3 |

Please refer to our [paper](https://arxiv.org/abs/2407.14679) for the full set of results.

## Inference

**Engine:** TensorRT-LLM

**Test Hardware:** NVIDIA A100

**DType:** Float16/BFloat16

## Limitations

The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, and it may produce socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citation

If you find our work helpful, please consider citing our paper:

```
@article{minitron2024,
  title={Compact Language Models via Pruning and Knowledge Distillation},
  author={Saurav Muralidharan and Sharath Turuvekere Sreenivas and Raviraj Joshi and Marcin Chochowski and Mostofa Patwary and Mohammad Shoeybi and Bryan Catanzaro and Jan Kautz and Pavlo Molchanov},
  journal={arXiv preprint arXiv:2407.14679},
  year={2024},
  url={https://arxiv.org/abs/2407.14679},
}
```