SMILE for India!

The Smile model understands Indian nuances and gives more accurate responses in an Indian context.

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

  • Developed by:
  • Funded by [optional]: https://github.com/ombhojane
  • Model type: Quantized
  • Language(s) (NLP): Python, Unsloth
  • License: MIT
  • Finetuned from model [optional]: Qwen/Qwen2.5-1.5B-Instruct

How to Get Started with the Model

```python
from transformers import pipeline
import torch

messages = [
    {"role": "user", "content": "Give Indian tadka ingredients"}
]

# Use the GPU if available
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("text-generation", model="ombhojane/smile-small", device=device)

# Generate a longer response
outputs = pipe(messages, max_new_tokens=200, num_return_sequences=1)

# The result carries the whole conversation; the assistant's reply is the last message
print(outputs[0]["generated_text"][-1]["content"])
```
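When the pipeline is given chat-style messages, recent transformers versions return the full conversation under the `generated_text` key. A small helper (the name `last_assistant_reply` is my own, not part of the library) makes extracting the reply a bit more robust than indexing blindly:

```python
def last_assistant_reply(outputs):
    """Pull the assistant's final reply out of a chat-style
    text-generation pipeline result. Assumes the output structure of
    recent transformers versions, where each result carries the whole
    conversation as a list of {"role", "content"} dicts."""
    conversation = outputs[0]["generated_text"]
    # Walk backwards to the most recent assistant turn
    for message in reversed(conversation):
        if message["role"] == "assistant":
            return message["content"]
    return None
```

This avoids hard-coding a position in the conversation, which helps if a system prompt or multiple turns are present.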

Bias, Risks, and Limitations

The parent model is comparable to a small language model (SLM), so it may lag in some specialized areas.

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

Training Details

Training Data

https://huggingface.co/datasets/ombhojane/smile-india
