Commit c6a1bd4 · Parent: 1844e69 · Update README.md
README.md CHANGED
@@ -106,7 +106,7 @@ alt="drawing" width="700"/>
 - For sampled FLAN data:
 - We follow their original data format, i.e., we did not set special tokens to separate in-context learning examples.
 - In summary:
-- We recommend you use our format and add our special tokens (such as `<USER>` and `<SYSTEM>`) to get better performance. However, you may not necessarily need to exactly follow our format if you do observe random behaviors.
+- We recommend you use our format and add our special tokens (such as `<USER>` and `<SYSTEM>`) to get better performance. However, you may not necessarily need to exactly follow our format if you do not observe random behaviors.
 - We found that T5 model series such as Flan-t5 and DialogStudio-T5 may generate repetitive tokens during inference. If you find such repetition issues, you can set the `repetition_penalty` in model.generate(), such as 1.5, to mitigate them. Note that `repetition_penalty=1.0` by default.

 # Usage
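The updated bullets amount to two practical tips: mark dialogue turns with the repository's special tokens (such as `<USER>` and `<SYSTEM>`), and raise `repetition_penalty` in `model.generate()` if the model starts repeating itself. Below is a minimal sketch of how that might look with Hugging Face `transformers`; the checkpoint id and the exact prompt layout are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch of the two README recommendations above.
# The checkpoint id and prompt format below are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Salesforce/dialogstudio-t5-base-v1.0"  # hypothetical checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Mark turns with the project's special tokens, e.g. <USER> / <SYSTEM>.
prompt = "<USER> I'd like to book a table for two tonight. <SYSTEM>"
inputs = tokenizer(prompt, return_tensors="pt")

# repetition_penalty defaults to 1.0; raising it (e.g. to 1.5) discourages the
# repetitive tokens sometimes seen with Flan-T5 / DialogStudio-T5 at inference.
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```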