granite-embedding-english Model Card

This Model2Vec model is a distilled version of the [ibm-granite/granite-embedding-30m-english](https://huggingface.co/ibm-granite/granite-embedding-30m-english) Sentence Transformer. It uses static embeddings, which allows text embeddings to be computed orders of magnitude faster on both GPU and CPU, making it well suited to applications where computational resources are limited or real-time performance is critical. Model2Vec models are among the smallest and fastest static embedders available: distilled models can be up to 50 times smaller and up to 500 times faster than the original Sentence Transformers.
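"Static" here means each token has a precomputed vector, so a sentence embedding is obtained by looking up token vectors and pooling them rather than running a transformer forward pass. The toy sketch below illustrates the lookup-and-pool idea with a made-up two-token vocabulary and simple mean pooling (Model2Vec's actual tokenization and pooling details differ):

```python
import numpy as np

# Toy static vocabulary: token -> precomputed vector (made-up values,
# standing in for the distilled embedding table)
vocab = {
    "example": np.array([0.2, -0.1, 0.4]),
    "sentence": np.array([0.0, 0.3, -0.2]),
}

def embed(text: str) -> np.ndarray:
    """Look up each known token's static vector and mean-pool them."""
    vectors = [vocab[tok] for tok in text.lower().split() if tok in vocab]
    return np.mean(vectors, axis=0)

emb = embed("Example sentence")  # no neural network inference involved
```

Because encoding reduces to table lookups and a pooling step, cost scales with sentence length alone, which is why static models run fast even on CPU.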

Installation

Install model2vec using pip:

```bash
pip install model2vec
```

Usage

Using Model2Vec

The Model2Vec library is the fastest and most lightweight way to run Model2Vec models.

Load this model using the from_pretrained method:

```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
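The embeddings returned by `encode` are plain NumPy arrays, so downstream similarity scoring needs no extra framework. A minimal sketch of cosine similarity between two encoded sentences (the vectors below are stand-ins for real `model.encode` output):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice: a, b = model.encode(["sent 1", "sent 2"])
a = np.array([0.3, 0.1, -0.2])
b = np.array([0.25, 0.15, -0.1])

score = cosine_similarity(a, b)  # close to 1.0 for similar sentences
```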

Using Sentence Transformers

You can also use the Sentence Transformers library to load and use the model:

```python
from sentence_transformers import SentenceTransformer

# Load a pretrained Sentence Transformer model
model = SentenceTransformer("cnmoro/granite-30m-distilled")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
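A common use of either loading path is semantic search: encode a corpus once, then rank documents against a query embedding. The sketch below uses stand-in NumPy arrays in place of real `model.encode` output; after L2-normalizing, a single matrix product yields cosine scores for the whole corpus:

```python
import numpy as np

corpus = ["doc about cats", "doc about finance", "doc about dogs"]

# Stand-in embeddings; in practice:
#   corpus_embs = model.encode(corpus)
#   query_emb = model.encode(["a feline question"])[0]
corpus_embs = np.array([[0.9, 0.1], [0.0, 1.0], [0.8, 0.2]])
query_emb = np.array([1.0, 0.0])

# L2-normalize, then one matrix product gives cosine scores per document
corpus_norm = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
query_norm = query_emb / np.linalg.norm(query_emb)
scores = corpus_norm @ query_norm

best = corpus[int(np.argmax(scores))]
```

With static embeddings, re-encoding large corpora is cheap enough that this pipeline can run entirely on CPU.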
Model Details

The model has 12.9M parameters, stored as F32 tensors in the Safetensors format.
