---
license: mit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- RecurvAI/Recurv-Medical-Dataset
language:
- en
pipeline_tag: text-generation
tags:
- medical
- anamnesis
---
# Recurv-Medical-Llama Model
[License: MIT](https://opensource.org/license/MIT)
[Model on Hugging Face](https://huggingface.co/RecurvAI/Recurv-Medical-Lllama)
## **Overview**
The **Recurv-Medical-Llama** model is a fine-tuned version of Meta's Llama 3.1 8B Instruct, developed to provide precise, contextual assistance to healthcare professionals and researchers. Leveraging state-of-the-art instruction tuning, it excels at answering medical queries, assisting with anamnesis, and generating detailed explanations tailored to medical scenarios.
**(Knowledge cut-off date: 22 January 2025)**
### **Key Features**
- Optimized for medical-specific queries across various specialties.
- Fine-tuned for clinical and research-oriented workflows.
- Lightweight parameter-efficient fine-tuning with LoRA (Low-Rank Adaptation).
- Multi-turn conversation support for context-rich interactions (see the sketch after this list).
- Generates comprehensive answers and evidence-based suggestions.
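As a sketch of the multi-turn support, `llama-cpp-python` exposes an OpenAI-style chat-completion interface that applies the chat template embedded in the GGUF file. This is a minimal example, assuming the file ships with the Llama 3.1 chat template (installation is covered below; the questions are illustrative):
```python
from llama_cpp import Llama

llm = Llama(model_path="recurv_medical_llama.gguf", n_ctx=4096)

# Keep the full history so each turn sees prior context
messages = [
    {"role": "system", "content": "You are a medical assistant for clinicians."},
    {"role": "user", "content": "What are common causes of acute chest pain?"},
]
reply = llm.create_chat_completion(messages=messages, max_tokens=256)
messages.append(reply["choices"][0]["message"])

# Follow-up question that relies on the earlier turn
messages.append({"role": "user", "content": "Which of those require emergency care?"})
reply = llm.create_chat_completion(messages=messages, max_tokens=256)
print(reply["choices"][0]["message"]["content"])
```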
---
## **Model Card**
| **Parameter** | **Details** |
|----------------------------|----------------------------------------------------------------------------------------------|
| **Base Model**             | Meta Llama 3.1 8B Instruct                                                                     |
| **Fine-Tuning Framework** | LoRA |
| **Dataset Size** | 67,299 high-quality Q&A pairs |
| **Context Length** | 4,096 tokens |
| **Training Steps** | 100,000 |
| **Model Size** | 8 billion parameters |
---
## **Model Architecture**
### **Dataset Sources**
The dataset comprises high-quality Q&A pairs curated from medical textbooks, research papers, and clinical guidelines.
| Source | Description |
|---------------------------|--------------------------------------------------------------------------------------|
| **PubMed** | Extracted insights from open-access medical research. |
| **Clinical Guidelines** | Data sourced from WHO, CDC, and specialty-specific guidelines. |
| **EHR-Simulated Data** | Synthetic datasets modeled on real-world patient records for anamnesis workflows. |
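The curated pairs are published as the [RecurvAI/Recurv-Medical-Dataset](https://huggingface.co/datasets/RecurvAI/Recurv-Medical-Dataset) listed in the model metadata. A minimal sketch for inspecting it with the `datasets` library; the `train` split name is an assumption to verify against the dataset card:
```python
from datasets import load_dataset

# Load the Q&A pairs used for fine-tuning ("train" split assumed)
ds = load_dataset("RecurvAI/Recurv-Medical-Dataset", split="train")

print(ds)       # Feature schema and row count
print(ds[0])    # First Q&A pair as a dict
```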
---
## **Installation and Usage**
### **1. Installation**
```bash
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
```
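If the GGUF weights are not already on disk, they can be fetched from the Hub first (requires `pip install huggingface-hub`). A minimal sketch; the repository ID and filename are assumptions inferred from this card and the snippet below, so adjust them to the actual repo:
```python
from huggingface_hub import hf_hub_download

# Download the quantized weights into the local cache and return their path
model_path = hf_hub_download(
    repo_id="RecurvAI/Recurv-Medical-Llama",  # assumed repository ID
    filename="recurv_medical_llama.gguf",     # assumed GGUF filename
)
print(model_path)
```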
### **2. Load the Model**
```python
from llama_cpp import Llama

llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=2048,     # Context window (the model supports up to 4,096 tokens)
    n_threads=4     # Number of CPU threads to use
)
```
### **3. Run Inference**
```python
prompt = "What is Paracetamol?"
output = llm(
prompt,
max_tokens=256, # Maximum number of tokens to generate
temperature=0.5, # Controls randomness (0.0 = deterministic, 1.0 = creative)
top_p=0.95, # Nucleus sampling parameter
stop=["###"], # Optional stop words
echo=True # Include prompt in the output
)
# Print the generated text
print(output['choices'][0]['text'])
```
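For longer answers you may prefer to see tokens as they are produced instead of waiting for the full completion. Passing `stream=True` turns the call into a generator of partial chunks; a minimal sketch reusing the `llm` instance from above:
```python
# Stream tokens as they are generated
for chunk in llm(
    "Explain the mechanism of action of Paracetamol.",
    max_tokens=256,
    temperature=0.5,
    stream=True,  # Yield incremental chunks instead of one final dict
):
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```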
---
## **Try the Model**
Try [Recurv-Medical-Llama](https://recurvai.org) on our website.
## **Contributing**
We welcome contributions to enhance Recurv-Medical-Llama. You can:
- Share feedback or suggestions on the Hugging Face Model Hub.
- Submit pull requests or issues for model improvements.
---
## **License**
This model is licensed under the **MIT License**.
---
## **Community**
For questions or support, connect with us via:
- **Twitter**: [RecurvAI](https://x.com/recurvai)
- **Email**: [[email protected]](mailto:[email protected])
---
## **Acknowledgments**
Special thanks to the medical community and researchers for their valuable insights and support in building this model. Together, we're advancing AI in healthcare.