Recurv committed
Commit 06340cf · verified · 1 Parent(s): 8771bb0

Update README.md

Files changed (1): README.md (+128 -3)

README.md

---
license: mit
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- RecurvAI/Recurv-Medical-Dataset
language:
- en
pipeline_tag: table-question-answering
tags:
- medical
- anamnesis
---
# 🧠 Recurv-Medical-Llama Model

[![License](https://img.shields.io/badge/license-MIT-blue?style=flat-square)](https://opensource.org/license/MIT)
[![HF](https://img.shields.io/badge/HuggingFace-Recurv--Medical--Llama-yellow?style=flat-square&logo=huggingface)](https://huggingface.co/RecurvAI/Recurv-Medical-Llama)

## **Overview**

**Recurv-Medical-Llama** is a fine-tuned version of Meta's Llama 3.1 8B Instruct, developed to provide precise, contextual assistance to healthcare professionals and researchers. Built with state-of-the-art instruction-tuning techniques, the model excels at answering medical queries, assisting with anamnesis, and generating detailed explanations tailored to medical scenarios.

**(Knowledge cut-off date: 22nd January 2025)**

### 🎯 **Key Features**
- Optimized for medical-specific queries across various specialties.
- Fine-tuned for clinical and research-oriented workflows.
- Lightweight, parameter-efficient fine-tuning with LoRA (Low-Rank Adaptation); see the configuration sketch below the Model Card.
- Multi-turn conversation support for context-rich interactions.
- Generates comprehensive answers and evidence-based suggestions.

---

## 🚀 **Model Card**

| **Parameter**             | **Details**                   |
|---------------------------|-------------------------------|
| **Base Model**            | Meta Llama 3.1 8B Instruct    |
| **Fine-Tuning Framework** | LoRA                          |
| **Dataset Size**          | 67,299 high-quality Q&A pairs |
| **Context Length**        | 4,096 tokens                  |
| **Training Steps**        | 100,000                       |
| **Model Size**            | 8 billion parameters          |
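
The card does not publish the exact LoRA hyperparameters. As a rough illustration of how a parameter-efficient setup of this kind is usually wired together with Hugging Face PEFT, here is a minimal sketch; the rank, alpha, dropout, and target modules below are assumptions, not the values used for this model.

```python
# Hypothetical LoRA configuration, for illustration only -- the hyperparameters
# actually used to train Recurv-Medical-Llama are not stated in this card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

lora_config = LoraConfig(
    r=16,                # assumed low-rank dimension
    lora_alpha=32,       # assumed scaling factor
    lora_dropout=0.05,   # assumed dropout on the adapter path
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```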

---

## 📊 **Model Architecture**

### **Dataset Sources**
The dataset comprises high-quality Q&A pairs curated from medical textbooks, research papers, and clinical guidelines.

| Source                    | Description                                                                        |
|---------------------------|------------------------------------------------------------------------------------|
| **PubMed**                | Extracted insights from open-access medical research.                              |
| **Clinical Guidelines**   | Data sourced from WHO, CDC, and specialty-specific guidelines.                     |
| **EHR-Simulated Data**    | Synthetic datasets modeled on real-world patient records for anamnesis workflows.  |

---

## 🛠️ **Installation and Usage**

### **1. Installation**

```bash
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
```
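
If the GGUF weights are published on the Hugging Face Hub, one convenient way to fetch them is with `huggingface_hub`. This is only a sketch: the `repo_id` and `filename` below are assumptions based on this card, so check the repository's file listing for the actual names.

```python
# Sketch: download the GGUF file from the Hub and get its local cache path.
# repo_id and filename are assumptions -- verify them against the repository.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="RecurvAI/Recurv-Medical-Llama",
    filename="recurv_medical_llama.gguf",
)
print(model_path)  # pass this path to llama_cpp.Llama(model_path=...)
```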

### **2. Load the Model**

```python
from llama_cpp import Llama

llm = Llama(
    model_path="recurv_medical_llama.gguf",  # Path to the GGUF weights
    n_ctx=2048,                              # Context window
    n_threads=4                              # Number of CPU threads to use
)
```
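
If you installed the cuBLAS build above, `llama-cpp-python` can also offload layers to the GPU through `n_gpu_layers`. A variant of the loading step is sketched below; the offload value is an assumption and depends on available VRAM, and `n_ctx` is raised to the 4,096-token context length listed in the Model Card.

```python
from llama_cpp import Llama

# GPU-offloaded variant of the loading step (assumes a CUDA-enabled build).
llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=4096,       # matches the context length in the Model Card
    n_threads=4,      # CPU threads for any work left on the CPU
    n_gpu_layers=-1   # assumed: offload all layers; lower this on smaller GPUs
)
```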

### **3. Run Inference**

```python
prompt = "What is Paracetamol?"
output = llm(
    prompt,
    max_tokens=256,    # Maximum number of tokens to generate
    temperature=0.5,   # Controls randomness (0.0 = deterministic, 1.0 = creative)
    top_p=0.95,        # Nucleus sampling parameter
    stop=["###"],      # Optional stop sequences
    echo=True          # Include prompt in the output
)

# Print the generated text
print(output['choices'][0]['text'])
```
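
For the multi-turn use mentioned under Key Features, `llama-cpp-python` also exposes a chat-style API. The sketch below reuses the `llm` object from step 2 and assumes the GGUF embeds a Llama 3.1 chat template (as Instruct conversions usually do); the system prompt and questions are purely illustrative.

```python
# Minimal multi-turn sketch using create_chat_completion.
messages = [
    {"role": "system", "content": "You are a careful medical assistant."},
    {"role": "user", "content": "What is Paracetamol?"},
]

first = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
answer = first["choices"][0]["message"]["content"]
print(answer)

# Follow-up turn that keeps the earlier exchange as context.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "How does it differ from ibuprofen?"},
]
follow_up = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
print(follow_up["choices"][0]["message"]["content"])
```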

---

## 🌟 **Try The Model**
🚀 Try [Recurv-Medical-Llama](https://recurvai.org) on our website.

## 🙌 **Contributing**

We welcome contributions to enhance Recurv-Medical-Llama. You can:
- Share feedback or suggestions on the Hugging Face Model Hub.
- Submit pull requests or issues for model improvement.

---

## 📜 **License**

This model is licensed under the **MIT License**.

---

## 📞 **Community**

For questions or support, connect with us via:
- **Twitter**: [RecurvAI](https://x.com/recurvai)
- **Email**: [[email protected]](mailto:[email protected])

---

## 🤝 **Acknowledgments**

Special thanks to the Solana ecosystem developers and the open-source community for their invaluable contributions and support.