Commit 9bd0a23 (verified) · 1 parent: 5464b19
Ihor committed: Update README.md

Files changed (1):

1. README.md (+40 −28)
````diff
--- a/README.md
+++ b/README.md
@@ -37,6 +37,16 @@ Install or update the gliner package:
 pip install gliner -U
 ```
 
+And LLM2Vec package:
+```bash
+pip install llm2vec
+```
+
+To use this particular Qwen-based model you need different `transformers` package version than llm2vec requires, so install it manually:
+```bash
+pip install transformers==4.44.1
+```
+
 Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
 
 ```python
@@ -92,37 +102,39 @@ outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
 ### Benchmarks
 Below you can see the table with benchmarking results on various named entity recognition datasets:
 
+Here’s the updated table with your new data:
+
 | Dataset | Score |
 |-------------------------|--------|
-| ACE 2004 | 26.8% |
-| ACE 2005 | 29.2% |
-| AnatEM | 25.3% |
-| Broad Tweet Corpus | 66.8% |
-| CoNLL 2003 | 60.3% |
-| FabNER | 21.2% |
-| FindVehicle | 28.3% |
-| GENIA_NER | 58.3% |
-| HarveyNER | 18.3% |
-| MultiNERD | 64.7% |
-| Ontonotes | 28.4% |
-| PolyglotNER | 45.3% |
-| TweetNER7 | 35.9% |
-| WikiANN en | 53.6% |
-| WikiNeural | 73.4% |
-| bc2gm | 63.2% |
-| bc4chemd | 56.8% |
-| bc5cdr | 71.3% |
-| ncbi | 64.9% |
-| **Average** | **47.0%** |
+| ACE 2004 | 31.5% |
+| ACE 2005 | 31.5% |
+| AnatEM | 43.4% |
+| Broad Tweet Corpus | 55.6% |
+| CoNLL 2003 | 60.1% |
+| FabNER | 23.9% |
+| FindVehicle | 30.2% |
+| GENIA_NER | 50.7% |
+| HarveyNER | 16.9% |
+| MultiNERD | 53.3% |
+| Ontonotes | 28.1% |
+| PolyglotNER | 39.2% |
+| TweetNER7 | 35.3% |
+| WikiANN en | 53.2% |
+| WikiNeural | 65.0% |
+| bc2gm | 56.3% |
+| bc4chemd | 54.4% |
+| bc5cdr | 71.0% |
+| ncbi | 63.7% |
+| **Average** | **45.4%** |
 | | |
-| CrossNER_AI | 56.7% |
-| CrossNER_literature | 61.5% |
-| CrossNER_music | 70.2% |
-| CrossNER_politics | 75.6% |
-| CrossNER_science | 66.8% |
-| mit-movie | 39.9% |
-| mit-restaurant | 41.7% |
-| **Average (zero-shot benchmark)** | **58.9%** |
+| CrossNER_AI | 54.0% |
+| CrossNER_literature | 64.4% |
+| CrossNER_music | 63.0% |
+| CrossNER_politics | 69.3% |
+| CrossNER_science | 64.2% |
+| mit-movie | 52.7% |
+| mit-restaurant | 37.6% |
+| **Average (zero-shot benchmark)** | **57.9%** |
 
 ### Join Our Discord
 
````
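The usage the diff describes (loading a model with `GLiNER.from_pretrained` and calling `predict_entities` with a list of labels) can be sketched as follows. This is a minimal sketch, not the repository's own example: the model id is a placeholder for this repo's id, the sample text and labels are made up, and the small `top_entities` helper is a hypothetical post-processing convenience, not part of the gliner API.

```python
def top_entities(entities, threshold=0.5):
    """Keep predictions at or above the confidence threshold, highest score first.

    `entities` is a list of dicts as returned by `predict_entities`,
    each with at least "text", "label", and "score" keys.
    """
    kept = [e for e in entities if e["score"] >= threshold]
    return sorted(kept, key=lambda e: e["score"], reverse=True)


if __name__ == "__main__":
    # Requires: pip install gliner (plus the extra packages the README lists).
    from gliner import GLiNER

    # Placeholder id — substitute this repository's model id.
    model = GLiNER.from_pretrained("<model-id>")

    text = "Cristiano Ronaldo plays for Al Nassr in Riyadh."
    labels = ["person", "organization", "location"]

    entities = model.predict_entities(text, labels, threshold=0.5)
    for e in top_entities(entities):
        print(e["text"], "=>", e["label"])
```

The model download and inference are kept under the `__main__` guard so the helper can be reused (or tested) without loading the model.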
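As a quick arithmetic check, the two averages in the updated table are consistent with the per-dataset scores the diff adds (19 datasets in the first block, 7 in the zero-shot block):

```python
# Per-dataset scores from the updated benchmark table, in table order.
main_scores = [31.5, 31.5, 43.4, 55.6, 60.1, 23.9, 30.2, 50.7, 16.9, 53.3,
               28.1, 39.2, 35.3, 53.2, 65.0, 56.3, 54.4, 71.0, 63.7]
zero_shot_scores = [54.0, 64.4, 63.0, 69.3, 64.2, 52.7, 37.6]

def mean(xs):
    """Arithmetic mean, rounded to one decimal place as in the table."""
    return round(sum(xs) / len(xs), 1)

print(mean(main_scores))       # 45.4 — matches the reported **Average**
print(mean(zero_shot_scores))  # 57.9 — matches the reported zero-shot average
```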