---
license: eupl-1.1
language:
- en
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
---

European legislation from CELLAR/EUROVOC: the English-only entries of https://huggingface.co/datasets/EuropeanParliament/Eurovoc, enriched with embeddings and ready for semantic search.

Last update 16.05.2024: 352,011 entries.

## Usage 

### With Pandas / Polars
Simply download the parquet file and read it with pandas or polars.
```python
import pandas as pd  # or: import polars as pd (polars also provides read_parquet)
df = pd.read_parquet("CELLAR_EN_16_05_2024.parquet")
df
```
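
If you prefer to fetch the file programmatically instead of downloading it by hand, `huggingface_hub` can pull it straight from the dataset repo (a minimal sketch; it assumes the parquet sits at the repo root under the filename shown above):

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# Download the parquet into the local Hugging Face cache and get its path
path = hf_hub_download(
    repo_id="do-me/Eurovoc_en",
    filename="CELLAR_EN_16_05_2024.parquet",  # assumed to match the file in this repo
    repo_type="dataset",
)
df = pd.read_parquet(path)
```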

### With HF datasets

```python 
from datasets import load_dataset
ds = load_dataset("do-me/Eurovoc_en")
df = ds["train"].to_pandas()
df
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/eAINKZ8HvQuCHI7WxD-HQ.png)
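
If you only want to inspect a few rows without downloading the full dataset first, `datasets` also supports streaming (a sketch; `take` yields the first n examples of the streamed split):

```python
from datasets import load_dataset

# Stream the dataset instead of materializing it on disk
ds = load_dataset("do-me/Eurovoc_en", streaming=True)
for row in ds["train"].take(3):
    print(row)
```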

## Semantic Search 
As a first test, the first 512 tokens of every text were embedded with the model2vec library and the https://huggingface.co/minishlab/M2V_base_output model from @minishlab.
After loading the dataset, use the `embeddings` column for semantic search as shown below. See the Jupyter notebook for the full processing script.
You can re-run it on consumer-grade hardware without a GPU: inference took `Wall time: 1min 36s` on an M3 Max. Embedding the entire text instead takes 50 minutes but yields poor quality; this is currently being investigated.
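
For reference, the embedding step looks roughly like this (a minimal sketch, not the exact notebook code; it assumes the raw documents live in a `text` column and that model2vec's `encode` accepts a `max_length` argument for truncation):

```python
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/M2V_base_output")

# Embed every document, truncated to its first 512 tokens
texts = df["text"].tolist()  # "text" is an assumed column name
df["embeddings"] = list(model.encode(texts, max_length=512))
```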

```python
from model2vec import StaticModel
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

model = StaticModel.from_pretrained("minishlab/M2V_base_output")

query = "social democracy"
query_emb = model.encode(query)

# Stack the per-row embedding arrays into one (n_docs, dim) matrix
embeddings_matrix = np.stack(df["embeddings"].to_numpy())

# Cosine similarity of every document against the query, then rank by it
df["cos_sim"] = cosine_similarity(embeddings_matrix, query_emb.reshape(1, -1))[:, 0]
df = df.sort_values("cos_sim", ascending=False)
df
```

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/wTvM35qwFcn5lw__JyVli.png)
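
For repeated queries, the same steps wrap neatly into a small helper (a sketch reusing the objects defined above):

```python
def search(df, model, query, top_k=10):
    """Return the top_k rows most similar to the query text."""
    query_emb = model.encode(query)
    matrix = np.stack(df["embeddings"].to_numpy())
    sims = cosine_similarity(matrix, query_emb.reshape(1, -1))[:, 0]
    return df.assign(cos_sim=sims).nlargest(top_k, "cos_sim")

results = search(df, model, "renewable energy subsidies")
```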