Student Chat Toxicity Classifier
This model is a fine-tuned version of `s-nlp/roberta_toxicity_classifier`, designed to classify text messages in student conversations as toxic or non-toxic. It is specifically tailored to detect and flag malpractice suggestions, unethical advice, and other toxic communication while encouraging ethical, positive interactions among students.
Try the model live in this Hugging Face Space.
Model Details
- Language: English (`en`)
- Base Model: `s-nlp/roberta_toxicity_classifier`
- Task: Text Classification (Binary)
  - Class 0: Non-Toxic
  - Class 1: Toxic
Key Features
- Detects messages promoting cheating or malpractice.
- Flags harmful or unethical advice in student chats.
- Encourages ethical and constructive communication.
Training Details
- Dataset: The model was fine-tuned on a custom dataset containing examples of student conversations labeled as toxic (malpractice suggestions, harmful advice) or non-toxic (positive and constructive communication).
- Preprocessing:
  - Tokenization using `RobertaTokenizer`.
  - Truncation and padding applied for consistent input length (`max_length=128`).
- Framework: Hugging Face's `transformers` library.
- Optimizer: `AdamW`
- Loss Function: `CrossEntropyLoss`
- Epochs: 3 (adjusted for convergence); see the fine-tuning sketch below.
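For reference, here is a minimal fine-tuning sketch consistent with the settings above. The example texts, labels, batch size, and learning rate are illustrative assumptions, not the actual training data or configuration.

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Hypothetical labeled examples: 0 = Non-Toxic, 1 = Toxic
texts = ["Let's review the lecture notes together.", "Just copy my answers during the exam."]
labels = torch.tensor([0, 1])

# Start from the base model named in this card
base = "s-nlp/roberta_toxicity_classifier"
tokenizer = RobertaTokenizer.from_pretrained(base)
model = RobertaForSequenceClassification.from_pretrained(base, num_labels=2)

# Tokenize with truncation/padding to max_length=128, as described above
enc = tokenizer(texts, truncation=True, padding="max_length", max_length=128, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optimizer = AdamW(model.parameters(), lr=2e-5)  # learning rate is an assumed value
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # 3 epochs, as listed above
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
        loss = loss_fn(logits, batch_labels)
        loss.backward()
        optimizer.step()
```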
Intended Use
This model is intended for educational platforms, chat moderation tools, and student communication apps. Its purpose is to:
- Detect toxic messages, such as cheating suggestions, harmful advice, or unethical recommendations.
- Promote a positive and respectful chat environment for students.
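For a quick check inside a moderation prototype, the standard `transformers` pipeline API can also wrap the model. This is a minimal sketch; the returned label names depend on the model's `id2label` config and may appear as `LABEL_0` (Non-Toxic) and `LABEL_1` (Toxic).

```python
from transformers import pipeline

# Wrap the hosted model in a text-classification pipeline
classifier = pipeline("text-classification",
                      model="Sk1306/student_chat_toxicity_classifier_model")

# Returns a list of {'label': ..., 'score': ...} dicts
print(classifier("Let's split the chapters and share summaries."))
```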
Use it with the Gradio API:

```python
from gradio_client import Client

client = Client("Sk1306/Student_Ethics_Chat_Classifier")
result = client.predict(
    text="you can copy in exam to pass!!",
    api_name="/predict"
)
print(result)
```
Or load the model directly with `transformers`:
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

# Load the model and tokenizer
model_name = "Sk1306/student_chat_toxicity_classifier_model"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForSequenceClassification.from_pretrained(model_name)

# Function for toxicity prediction
def predict_toxicity(text):
    # Tokenize the input text
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    # Run the text through the model
    with torch.no_grad():
        outputs = model(**inputs)
    # Extract logits and apply softmax to get probabilities
    logits = outputs.logits
    probabilities = torch.nn.functional.softmax(logits, dim=-1)
    # Get the predicted class (0 = Non-Toxic, 1 = Toxic)
    predicted_class = torch.argmax(probabilities, dim=-1).item()
    return "Non-Toxic" if predicted_class == 0 else "Toxic"

# Test the model
message = "You can copy answers during the exam."
prediction = predict_toxicity(message)
print(f"Message: {message}\nPrediction: {prediction}")
```
Model tree for Sk1306/student_chat_toxicity_classifier_model
- Base model: `FacebookAI/roberta-large`
- Finetuned from: `s-nlp/roberta_toxicity_classifier`