# granite-embedding-english Model Card
This Model2Vec model is a distilled version of the [ibm-granite/granite-embedding-30m-english](https://huggingface.co/ibm-granite/granite-embedding-30m-english) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
## Installation

Install model2vec using pip:

```bash
pip install model2vec
```
## Usage

### Using Model2Vec
The Model2Vec library is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:

```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
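The embeddings returned by `encode` are plain NumPy vectors, so similarity between texts can be computed directly without any extra library. A minimal sketch using stand-in vectors (in practice these would come from `model.encode([...])`):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in a real application these are rows of
# model.encode(["query text", "document text"])
query_vec = np.array([1.0, 0.0, 1.0])
doc_vec = np.array([1.0, 1.0, 0.0])

score = cosine_similarity(query_vec, doc_vec)  # 1.0 means identical direction
```

A higher score indicates more semantically similar texts, which is the typical way these static embeddings are used for search and ranking.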
### Using Sentence Transformers
You can also load and use the model with the Sentence Transformers library:

```python
from sentence_transformers import SentenceTransformer

# Load a pretrained Sentence Transformer model
model = SentenceTransformer("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
## Model tree for cnmoro/granite-30m-distilled

Base model: [ibm-granite/granite-embedding-30m-english](https://huggingface.co/ibm-granite/granite-embedding-30m-english)