---
license: mit
language:
- en
pipeline_tag: text-classification
tags:
- finance
metrics:
- accuracy
datasets:
- Moritz-Pfeifer/FT_news_classification
---
[![GitHub Space](https://img.shields.io/badge/GitHub-Space-blue)](https://github.com/Moritz-Pfeifer/News_Inflation_Anchoring)
# Using Llama-2 to Extract Financial Times Opinions on the Bank of England
This model fine-tunes Llama-2-7B on an instruction dataset of Financial Times news to detect journalists' opinions about the Bank of England. It labels sentences as "positive", "negative", or "other".
#### Intended Use
The model is intended for analyses in which the credibility, reputation, and legitimacy of the Bank of England and its monetary policy play a role.
#### Performance
- Accuracy: 78%
- F1 Score: 0.78
- Precision: 0.78
- Recall: 0.78
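The figures above can be reproduced from a set of gold labels and model predictions. The sketch below shows how accuracy and a macro-averaged F1 over the three labels might be computed in plain Python; the example labels are made up for illustration and are not taken from the evaluation set.

```python
# Illustrative metric computation over the three labels
# ("positive", "negative", "other"). Example data is invented.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels=("positive", "negative", "other")):
    """Unweighted mean of the per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

gold = ["positive", "negative", "other", "positive"]
pred = ["positive", "other", "other", "positive"]
print(accuracy(gold, pred))  # 0.75
```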
### Usage
You can use this model in your own applications via the Hugging Face Transformers library. Below is a Python snippet demonstrating how to load the FT-News classification model and run inference:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
# Load the model
model = AutoPeftModelForCausalLM.from_pretrained("Moritz-Pfeifer/financial-times-classification-llama-2-7b-v1.3")
tokenizer = AutoTokenizer.from_pretrained("Moritz-Pfeifer/financial-times-classification-llama-2-7b-v1.3")
# Build the prompt and classify a single sentence
def predict_text(test, model, tokenizer):
    prompt = f"""
You are given an opinion about the Bank of England (BoE).
Analyze the sentiment of the opinion about the reputation of the Bank of England (BoE) enclosed in square brackets,
determine if it is positive, negative or other, and return the answer as the corresponding sentiment label
"positive" or "negative". If the opinion is not related, return "other".
[{test}] ="""
    pipe = pipeline(
        task="text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=1,
        temperature=0.1,
    )
    result = pipe(prompt)
    # The predicted label is generated after the trailing "="
    answer = result[0]["generated_text"].split("=")[-1]
    if "positive" in answer.lower():
        return "positive"
    elif "negative" in answer.lower():
        return "negative"
    else:
        return "other"
# Use the model
input_text = 'The report, by Lord Justice Bingham, said the Bank failed to take appropriate action after receiving a series of warnings over many years that fraud was taking place at BCCI.'
print(predict_text(input_text, model, tokenizer))
```
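For downstream analyses such as those described under *Intended Use*, the sentence-level labels can be aggregated into an article-level score. The helper below is a hypothetical sketch (not part of the released model) that maps each label to a numeric value and averages them:

```python
# Hypothetical aggregation of sentence-level labels into a net-sentiment
# score; the mapping is an illustrative assumption, not part of the model.
LABEL_SCORES = {"positive": 1, "negative": -1, "other": 0}

def net_sentiment(labels):
    """Average sentiment score over a list of predicted labels."""
    scored = [LABEL_SCORES[label] for label in labels if label in LABEL_SCORES]
    return sum(scored) / len(scored) if scored else 0.0

print(net_sentiment(["positive", "negative", "positive", "other"]))  # 0.25
```

A score near 1 would indicate uniformly positive coverage of the Bank of England, a score near -1 uniformly negative coverage.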