Update README.md
README.md
CHANGED
@@ -29,6 +29,12 @@ The goal of this meticulous training process is to equip the model with the abil
### Intended Uses

- **Offensive/Hate Speech Detection**: The primary intended use of this model is to detect offensive or hate speech in text data. It is well-suited for filtering and identifying inappropriate content in various applications.

+ - **Of Special Note**: The data suggests the word "like" is most often used in derogatory comparative statements.
+ - Such comparisons appear numerous times within the "Offensive Speech Dataset"; "You look like X" or "He smells like X" are quite common.
+ - Also of note, the absence of punctuation lends itself heavily to the "Offensive" dataset.
+ - Accordingly, the model will identify these patterns as well, since they are prominent in the data (see the sketch after this list).
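To make the observations above concrete, here is a minimal sketch of how one might probe the classifier on such phrasings with the `transformers` text-classification pipeline. The model ID `your-username/offensive-speech-detector` and the label names are placeholders, not the actual identifiers used by this model.

```python
from transformers import pipeline

# Placeholder model ID -- substitute the actual repository name of this model.
classifier = pipeline("text-classification", model="your-username/offensive-speech-detector")

# Phrasings that mirror the patterns noted above: "like"-based comparisons,
# with and without punctuation, plus a benign use of "like" for contrast.
examples = [
    "You look like a troll",     # comparative "like", no punctuation
    "You look like a troll.",    # same comparison, punctuated
    "He smells like garbage",    # comparative "like", no punctuation
    "I like this movie a lot.",  # benign use of "like"
]

for text in examples:
    result = classifier(text)[0]
    # Each result is a dict such as {"label": "...", "score": 0.97};
    # the label names depend on how the model was trained.
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```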
### How to Use

To use this model for offensive/hate speech detection, you can follow these steps:

```markdown