---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: avsolatorio/GIST-small-Embedding-v0
metrics:
- accuracy
widget:
- text: In Florida, some military veterans are now eligible for temporary teaching
    certificates even if they haven't completed a bachelor's degree.
- text: As the total national income falls, the proportion of it absorbed by government
    will rise.
- text: And while local far-right activists appear to have quietly accepted defeat
    over Belgrade Pride, a tame and small-scale annual event, the ferocity of their
    opposition to EuroPride reveals that social attitudes are not much different from
    2001.
- text: 'In return for this extraordinary gift, corporate shareholders owed an implicit
    obligation back to society: namely, that corporations ought to consider not only
    shareholder interests but broader societal interests when making decisions.'
- text: Nonetheless I believe it falls short for legal and historical reasons that
    I lay out in “Woke, Inc”, my book published last year.
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with avsolatorio/GIST-small-Embedding-v0
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.844578313253012
      name: Accuracy
---

# SetFit with avsolatorio/GIST-small-Embedding-v0

This is a [SetFit](https://github.com/huggingface/setfit) model for text classification. It uses [avsolatorio/GIST-small-Embedding-v0](https://huggingface.co/avsolatorio/GIST-small-Embedding-v0) as the Sentence Transformer embedding model, with a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
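
For orientation, here is a minimal sketch of that two-phase procedure using the SetFit `Trainer`. The two example sentences are invented placeholders (the actual training data is not published with this card); the label names match the label table below.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same embedding body this model was trained from.
model = SetFitModel.from_pretrained("avsolatorio/GIST-small-Embedding-v0")

# Placeholder few-shot examples; the real training set is not included in this card.
train_dataset = Dataset.from_dict({
    "text": [
        "The committee released its report on Tuesday.",
        "This policy is an utter disgrace.",
    ],
    "label": ["objective", "subjective"],
})

args = TrainingArguments(batch_size=32, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# train() runs both phases: contrastive fine-tuning of the Sentence Transformer
# body, then fitting the LogisticRegression head on the resulting embeddings.
trainer.train()
```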

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [avsolatorio/GIST-small-Embedding-v0](https://huggingface.co/avsolatorio/GIST-small-Embedding-v0)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label      | Examples                                                                                                                                                                                                                                                                                                                                                                                     |
|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| subjective | <ul><li>'Stakeholder capitalism poisons democracy and partisan politics poisons capitalism.'</li><li>'There is yet everywhere a deficit in the public revenue because the shrinkage in everything taxable was so sudden and violent.'</li><li>'Our system of unbridled profit-focused capitalism used to serve as perhaps the most important of those sanctuaries, but no longer.'</li></ul> |
| objective  | <ul><li>'But a top buying agent tells me that access to 13 can be gained if you know the right people.'</li><li>'A portion of positive tests around the country is being forwarded to the agency for genetic sequencing, according to a report by CBS News.'</li><li>'asked American Federation of Teachers President Randi Weingarten.'</li></ul>                                           |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.8446   |
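
As a rough sketch of how such a score could be reproduced (the evaluation split is not published here, so `test_texts` and `test_labels` below are placeholders):

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("setfit_model_id")  # replace with this model's Hub ID

# Placeholder evaluation data; substitute the real labeled test split.
test_texts = [
    "The report was released on Monday.",
    "Their response was shamefully inadequate.",
]
test_labels = ["objective", "subjective"]

preds = model.predict(test_texts)
print(accuracy_score(test_labels, list(preds)))
```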

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub ("setfit_model_id" is a placeholder; use this model's repo ID)
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("As the total national income falls, the proportion of it absorbed by government will rise.")
```
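
If you need class probabilities rather than hard labels, `SetFitModel` also exposes `predict_proba`, which returns the probabilities from the LogisticRegression head; a small sketch:

```python
# One row per input, one column per class.
probs = model.predict_proba([
    "As the total national income falls, the proportion of it absorbed by government will rise.",
])
print(probs)
```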

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 1   | 22.9219 | 77  |

| Label      | Training Sample Count |
|:-----------|:----------------------|
| objective  | 128                   |
| subjective | 128                   |

### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
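
As an illustrative sketch (not the exact training script), these values map onto SetFit's `TrainingArguments` roughly as follows; tuples give the embedding-phase and classifier-phase values respectively:

```python
from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(32, 32),                 # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)
```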

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0010 | 1    | 0.2715        | -               |
| 0.0484 | 50   | 0.2469        | -               |
| 0.0969 | 100  | 0.2247        | -               |
| 0.1453 | 150  | 0.0501        | -               |
| 0.1938 | 200  | 0.0039        | -               |
| 0.2422 | 250  | 0.0014        | -               |
| 0.2907 | 300  | 0.0011        | -               |
| 0.3391 | 350  | 0.0014        | -               |
| 0.3876 | 400  | 0.0010        | -               |
| 0.4360 | 450  | 0.0009        | -               |
| 0.4845 | 500  | 0.0008        | -               |
| 0.5329 | 550  | 0.0008        | -               |
| 0.5814 | 600  | 0.0008        | -               |
| 0.6298 | 650  | 0.0007        | -               |
| 0.6783 | 700  | 0.0007        | -               |
| 0.7267 | 750  | 0.0006        | -               |
| 0.7752 | 800  | 0.0007        | -               |
| 0.8236 | 850  | 0.0006        | -               |
| 0.8721 | 900  | 0.0005        | -               |
| 0.9205 | 950  | 0.0007        | -               |
| 0.9690 | 1000 | 0.0007        | -               |

### Framework Versions
- Python: 3.11.9
- SetFit: 1.0.3
- Sentence Transformers: 3.0.0
- Transformers: 4.40.2
- PyTorch: 2.1.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->