Update README.md
README.md
The **Fine-Tuned DistilBERT** is a variant of the BERT transformer model, distilled for efficient performance while maintaining high accuracy. It has been adapted and fine-tuned for the specific task of detecting fear mongering in text data.

### Definition

Fear Monger /ˈfɪrˌmʌŋ.ɡɚ/: to intentionally try to make people afraid of something when this is not necessary or reasonable.

The model, named "Falconsai/fear_mongering_detection", is pre-trained on a substantial amount of text data, which allows it to capture semantic nuances and contextual information present in natural language text. It has been fine-tuned with meticulous attention to hyperparameter settings, including batch size and learning rate, to ensure optimal model performance for the fear-mongering detection task.

This model has been trained for 100 epochs on a rather small dataset of under 50k examples, designed specifically for "Fear Mongering Identification". The goal of this meticulous training process is to equip the model with the ability to identify instances of fear mongering in text data effectively, making it ready to contribute to a wide range of applications involving human speech, written text, and generated content.

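For context, a fine-tuning setup along these lines could look roughly like the sketch below. Only the 100 epochs figure comes from this card; the base checkpoint name, batch size, learning rate, and the toy dataset are illustrative assumptions, not the actual training configuration.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for the (unreleased) under-50k-example training corpus.
raw = Dataset.from_dict({
    "text": [
        "If we don't act today, everything you care about will be gone tomorrow.",
        "The city council meets on Tuesday to review the annual budget.",
    ],
    "label": [1, 0],  # 1 = fear mongering, 0 = not (assumed label scheme)
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # assumed base checkpoint
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="fear_mongering_detection",
    num_train_epochs=100,            # stated in this card
    per_device_train_batch_size=16,  # assumed value
    learning_rate=2e-5,              # assumed value
)

Trainer(model=model, args=args, train_dataset=tokenized).train()
```
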
### How to Use
To use the model, load it through the Transformers `pipeline` API for text classification and pass it the statement you want to check:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Falconsai/fear_mongering_detection")

statement = "Put the text you want to analyze here."  # placeholder input
classifier(statement)
```

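For a single string, the pipeline returns a list with one dictionary containing a predicted `label` and a confidence `score`; the exact label strings come from the model's configuration and can be inspected via `classifier.model.config.id2label`.
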
## Model Details
- **Model Name:** Falconsai/fear_mongering_detection
- **Description:** Online platforms and forums can deploy the model to automatically flag or filter out content that may be perceived as fear-mongering. This helps maintain a more positive and constructive online environment.
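
As a loose illustration of that moderation use case, the sketch below runs the classifier over a batch of posts and flags likely fear-mongering. The label string (`"LABEL_1"`) and the confidence threshold are assumptions; confirm the model's actual labels before relying on this.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Falconsai/fear_mongering_detection")

# Assumed label name and threshold; check classifier.model.config.id2label
# for the real label strings before using this in production.
FEAR_LABEL = "LABEL_1"
THRESHOLD = 0.80

posts = [
    "Breaking: if this bill passes, no neighborhood in the country will be safe.",
    "The library is extending its weekend opening hours this month.",
]

# The pipeline accepts a list and returns one prediction per post.
for post, result in zip(posts, classifier(posts)):
    flagged = result["label"] == FEAR_LABEL and result["score"] >= THRESHOLD
    status = "FLAG" if flagged else "ok"
    print(f"{status:<5}{result['score']:.2f}  {post}")
```
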
## Limitations