Update README.md
README.md
CHANGED
---

# Arabic 1 Million Triplets (curated):
This is a curated dataset for use in Arabic ColBERT and SBERT models (among other uses).
In addition to the `anchor`, `positive` and `negative` columns, the dataset has two columns, `sim_pos` and `sim_neg`, which are the cosine similarities between the anchor (query) and the positive and negative examples, respectively.
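As a minimal sketch of working with these columns (the repository id below is a placeholder, not this dataset's actual id), the data can be loaded and inspected with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hugging Face id.
ds = load_dataset("user/arabic-1m-triplets-curated", split="train")

# Each row holds a triplet plus the two similarity scores described above.
row = ds[0]
print(row["anchor"], row["positive"], row["negative"], sep="\n")
print("sim_pos:", row["sim_pos"], "sim_neg:", row["sim_neg"])
```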
The cosine similarity uses an embedding model by AbderrahmanSkiredj1/Arabic_tex…
(inspired by [Omar Nicar](https://huggingface.co/Omartificial-Intelligence-Space), who made the [first Arabic SBERT embeddings model](https://huggingface.co/Omartificial-Intelligence-Space/Arabert-all-nli-triplet-Matryoshka) and a triplets dataset based on NLI).
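The exact embedding-model id is truncated above, so the snippet below uses a placeholder; assuming `sentence-transformers` is installed, the two similarity columns could be reproduced roughly like this:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Placeholder model id -- substitute the Arabic embedding model referenced above.
model = SentenceTransformer("some-user/arabic-embedding-model")

anchor = "..."    # the query text
positive = "..."  # a relevant passage
negative = "..."  # a non-relevant passage

emb = model.encode([anchor, positive, negative])
sim_pos = cos_sim(emb[0], emb[1]).item()  # corresponds to the `sim_pos` column
sim_neg = cos_sim(emb[0], emb[2]).item()  # corresponds to the `sim_neg` column
print(sim_pos, sim_neg)
```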
# Why another dataset?
While training an Arabic ColBERT model on a sample from the mMARCO dataset, I noticed retrieval issues. It is true that all of these triplet datasets are translated, but the quality was not up to expectation. I took the dataset used by the embedding model (which is NLI plus some 300K examples) and 1 million samples from mMARCO, removed lines that contained separate Latin words/phrases, and sampled 1 million rows of the combined data. Then I added the similarity columns and lengths.
This should enable researchers and users to filter on several criteria (including hard negatives). This is not to say the model used for the similarities was perfect: in some cases, examples annotated as negative were identical to the anchor/query. Adding the similarity columns took more time than training the models.
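As a hedged illustration of that kind of filtering (the repository id is again a placeholder and the thresholds are arbitrary, not recommendations), hard negatives could be selected like this:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hugging Face id.
ds = load_dataset("user/arabic-1m-triplets-curated", split="train")

# Keep triplets where the positive clearly beats the negative while the negative
# stays fairly close to the anchor (a "hard" negative). The thresholds are
# illustrative only -- tune them for your own training setup.
hard = ds.filter(
    lambda row: row["sim_pos"] > row["sim_neg"] + 0.05 and row["sim_neg"] > 0.4
)
print(f"{len(hard)} hard-negative triplets kept out of {len(ds)}")
```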

# Arabic SBERT and ColBERT models:

Filtered subsets based on certain criteria show impressive performance. Models will be uploaded and linked from here when ready.
If you saw earlier versions of triplets datasets under this account, they have been removed in favor of this one.