---
tags:
- roberta
library_name: peft
datasets:
- rotten_tomatoes
pipeline_tag: text-classification
---
# Adapter `solwol/roberta-sentiment-classifier-peft` for roberta-base
<!-- PEFT adapter for sentiment classification. -->
## Usage
First, install `transformers` and `peft`:
```bash
pip install -U transformers peft
```
Now, the PEFT adapter can be loaded and activated like this:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, RobertaConfig
config = RobertaConfig.from_pretrained(
    "roberta-base",
    id2label={0: "👎", 1: "👍"}
)
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", config=config)
model = PeftModel.from_pretrained(model, "solwol/roberta-sentiment-classifier-peft")
```
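Optionally, if the adapter is LoRA-based (an assumption, since the adapter type is not stated above), the adapter weights can be merged into the base model so that inference runs without the PEFT wrapper:
```python
# A minimal sketch, assuming a LoRA adapter: merge the adapter weights into
# the base roberta-base weights and drop the PEFT wrapper.
merged_model = model.merge_and_unload()
```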
Next, to perform sentiment classification:
```python
from transformers import AutoTokenizer, TextClassificationPipeline
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
classifier("This is awesome!")
# [{'label': '👍', 'score': 0.9765560626983643}]
```
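The pipeline also accepts a list of texts, so several reviews can be classified in a single call. The example inputs below are illustrative only:
```python
# Classify multiple texts at once; the pipeline returns one
# {'label': ..., 'score': ...} dict per input.
classifier([
    "A wonderful, heartfelt film.",
    "The plot made no sense at all.",
])
```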