How to evaluate the differences in response quality between the base Mistral-7B-v0.1 model and the fine-tuned e5-mistral-7b-instruct model
#48 · opened by emilylilil
Hi,
According to the documentation, e5-mistral-7b-instruct was initialized from Mistral-7B-v0.1 and fine-tuned with LoRA on a mixture of multilingual datasets. While it gains some multilingual capability from this, it is still mainly recommended for English tasks, since the base Mistral-7B-v0.1 was pretrained primarily on English data.
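For context, my understanding of the practical difference: the base checkpoint is a causal language model you prompt for text generation, while e5-mistral-7b-instruct is loaded as a plain encoder and the last token's hidden state is taken as a text embedding (that is how its model card describes the pooling). A rough sketch of what I mean, with toy prompts I made up:

```python
import torch
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# Base checkpoint: a causal LM, used for next-token generation.
gen_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
gen_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
prompt = gen_tok("The capital of France is", return_tensors="pt").to(gen_model.device)
out = gen_model.generate(**prompt, max_new_tokens=10)
print(gen_tok.decode(out[0], skip_special_tokens=True))

# Fine-tuned checkpoint: used as an encoder; the last token's hidden state
# is the text embedding (last-token pooling, per the e5 model card, which
# also appends an EOS token before pooling -- omitted here for brevity).
emb_tok = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
emb_model = AutoModel.from_pretrained(
    "intfloat/e5-mistral-7b-instruct", torch_dtype=torch.float16, device_map="auto"
)
batch = emb_tok("Query: how tall is the Eiffel Tower?", return_tensors="pt").to(emb_model.device)
with torch.no_grad():
    hidden = emb_model(**batch).last_hidden_state  # (1, seq_len, 4096)
embedding = hidden[0, -1]                          # (4096,) embedding vector
```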
I have the following question:
What are the key differences between Mistral-7B-v0.1 and e5-mistral-7b-instruct, and how can the two be compared in practice?
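To make "how to compare" concrete, here is the kind of quick probe I was imagining. This is only a sketch: the `embed` helper, the example texts, and the expectation that the fine-tuned model separates the pair more cleanly are my own assumptions, while the `Instruct: ...\nQuery: ...` format and last-token pooling follow the e5-mistral-7b-instruct model card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

def embed(model_id: str, texts: list[str]) -> torch.Tensor:
    """L2-normalized last-token embeddings (pooling as in the e5-mistral card)."""
    tok = AutoTokenizer.from_pretrained(model_id)
    if tok.pad_token is None:        # Mistral tokenizers ship without a pad token
        tok.pad_token = tok.eos_token
    tok.padding_side = "right"       # so the last-real-token indexing below is valid
    model = AutoModel.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    # The e5 card appends an EOS token so the pooled position is well defined.
    batch = tok(
        [t + tok.eos_token for t in texts], padding=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    last = batch["attention_mask"].sum(dim=1) - 1    # index of last real token
    emb = hidden[torch.arange(hidden.shape[0], device=hidden.device), last]
    return F.normalize(emb.float(), dim=-1)

# Toy data (made up); the e5 card formats queries as "Instruct: ...\nQuery: ...".
query = ("Instruct: Given a web search query, retrieve relevant passages."
         "\nQuery: how do solar panels work")
relevant = "Photovoltaic cells convert sunlight directly into electricity."
distractor = "The stock market closed higher on Friday after a volatile week."

for model_id in ["intfloat/e5-mistral-7b-instruct", "mistralai/Mistral-7B-v0.1"]:
    q, r, d = embed(model_id, [query, relevant, distractor])
    print(f"{model_id}: sim(relevant)={(q @ r).item():.3f}  "
          f"sim(distractor)={(q @ d).item():.3f}")
```

For a more systematic comparison, I assume one would run both checkpoints through an embedding benchmark such as MTEB, which is what the e5 authors report results on, but I'd like to hear how people actually approach this.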
I’d appreciate any insights from the community. Thanks in advance!