---
license: mit
---
## CentralBankRoBERTa

CentralBankRoBERTa is a large language model. It combines an economic agent classifier that distinguishes five basic macroeconomic agents with a binary sentiment classifier that identifies the emotional content of sentences in central bank communications.


#### Overview

The AudienceClassifier model is designed to classify the target audience of a given text. It determines whether the text addresses households, firms, the financial sector, the government, or the central bank itself. The model is based on the RoBERTa architecture and has been fine-tuned on a diverse and extensive dataset to provide accurate predictions.

#### Intended Use

The AudienceClassifier model is intended for applications where content needs to be categorized by target audience, such as the analysis of central bank communications.

#### Performance

- Accuracy: 93%
- F1 Score: 0.93
- Precision: 0.93
- Recall: 0.93

#### Usage

You can use these models in your own applications by leveraging the Hugging Face Transformers library. Below is a Python code snippet demonstrating how to load and use the AudienceClassifier model:

```python
from transformers import pipeline

# Load the AudienceClassifier model
audience_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-audience-classifier")

# Perform audience classification
audience_result = audience_classifier("Your text goes here.")
print("Audience Classification:", audience_result[0]['label'])
```
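
The companion sentiment classifier mentioned above can be loaded in the same way. The sketch below assumes it is published under `Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier`; check the Hugging Face Hub for the exact repository name.

```python
from transformers import pipeline

# Load the sentiment classifier (repository name assumed; verify on the Hugging Face Hub)
sentiment_classifier = pipeline(
    "text-classification",
    model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier",
)

# Classify the emotional content of a sentence from a central bank statement
sentiment_result = sentiment_classifier("The economic outlook has improved considerably.")
print("Sentiment Classification:", sentiment_result[0]['label'])
```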