ritvik77 committed on
Commit 2b60133 · verified · 1 Parent(s): 8b97aae

Update README.md

Files changed (1)
  1. README.md +200 -90
README.md CHANGED
@@ -1,99 +1,209 @@
- ---
- library_name: transformers
- tags:
- - transformer
- - AIAgent
- - FunctionCallAgent
- - Agenttool
- - Agent
- - Agentcall
- license: apache-2.0
- datasets:
- - Salesforce/xlam-function-calling-60k
- language:
- - en
- metrics:
- - accuracy
  base_model:
  - mistralai/Mistral-7B-Instruct-v0.3
  ---
- Model Details
- Model Description
- This is a fine-tuned LLM based on Mistral-7B-Instruct-v0.3, optimized for agent function calling. Using LoRA (Low-Rank Adaptation), the model efficiently executes structured API calls, enabling AI agents to interact seamlessly with external tools and services.
-
- Developed by: Ritvik Gaur
- Funded by: Self-funded
- Shared by: Hugging Face
- Model type: Causal Language Model (CLM)
- Language(s): English (en)
- License: Apache 2.0
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.3
- Model Sources
- Repository: Hugging Face Model Page
- Paper (related dataset): xLAM Function-Calling 60K
- Demo: [Coming soon]
- Uses
- Direct Use
- Agent function calling for structured API interaction
- AI assistants that automate tasks via tool execution
- RAG-based applications that require function-aware responses
- Downstream Use
- Fine-tuned for workflow automation and intelligent API calling
- Extendable for custom tool-call generation
- Out-of-Scope Use
- 🚫 Not intended for general text generation
- 🚫 Not designed for open-ended conversational AI
-
- How to Get Started with the Model
- Installation
- Install Hugging Face’s transformers library:
-
- pip install transformers torch
- Usage Example
- from transformers import AutoModelForCausalLM, AutoTokenizer
- import torch

- model_name = "ritvik77/FineTune_LoRA__AgentToolCall_Mistral-7B_Transformer"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
-
- input_text = """User: What's the weather like in New York?
- Agent: <function_call> {"tool": "get_weather", "parameters": {"location": "New York"}}"""
-
- inputs = tokenizer(input_text, return_tensors="pt").to("cuda" if torch.cuda.is_available() else "cpu")
- outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.5, top_p=0.9)

  generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
  print("Generated Output:\n", generated_text)
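The usage example stops at printing raw generated text; an agent loop would next pull the `<function_call>` JSON out of that text before dispatching the tool. A minimal parsing sketch (the `extract_function_call` helper is illustrative and not part of the model repo; only the `<function_call>` marker and JSON call shape come from the card):

```python
import json


def extract_function_call(generated_text: str):
    """Pull the JSON payload that follows the <function_call> marker, if any."""
    marker = "<function_call>"
    idx = generated_text.find(marker)
    if idx == -1:
        return None
    payload = generated_text[idx + len(marker):].strip()
    # The model may keep generating after the JSON object, so decode
    # only the first complete object rather than the whole tail.
    decoder = json.JSONDecoder()
    try:
        call, _ = decoder.raw_decode(payload)
        return call
    except json.JSONDecodeError:
        return None


call = extract_function_call(
    'Agent: <function_call> {"tool": "get_weather", "parameters": {"location": "New York"}}'
)
```

A real agent loop would then look up `call["tool"]` in a registry and invoke it with `call["parameters"]`, returning `None` results to a fallback path.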
- Training Details
- Training Data
- Dataset: Salesforce/xlam-function-calling-60k
- Data Structure: 60K+ function-call examples in structured JSON format
- Purpose: Teach the model structured tool execution
- Training Procedure
- Fine-tuned with LoRA (Low-Rank Adaptation)
- Mixed-precision training (BF16) for memory efficiency
- Trained with Hugging Face peft + transformers
- Training Hyperparameters
- Batch Size: 64
- Learning Rate: 2e-5
- Epochs: 3
- Optimizer: AdamW
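The procedure above can be sketched as a config fragment with Hugging Face `peft` and `transformers`. Batch size, learning rate, epochs, optimizer, and BF16 come from the card; the LoRA rank, alpha, dropout, and target modules are assumptions, since the card does not report them:

```python
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

# LoRA adapter configuration. r / lora_alpha / lora_dropout / target_modules
# are illustrative guesses -- the card does not state them.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common choice for Mistral-style attention
)

# Values stated on the card: batch size 64, lr 2e-5, 3 epochs, AdamW, BF16.
training_args = TrainingArguments(
    output_dir="lora-mistral-toolcall",
    per_device_train_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=3,
    optim="adamw_torch",
    bf16=True,
)
```

These objects would then be passed to `get_peft_model` and a `Trainer` along with the xlam-function-calling-60k dataset.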
- Evaluation
- Testing Data, Factors & Metrics
- Testing Data
- Function-calling dataset tested against real-world API tasks
- Factors
- Function selection accuracy
- Parameter correctness
- Structured JSON compliance
- Metrics
- Function Call Accuracy: 88.2% on the Berkeley Function-Calling Benchmark
- Execution Success Rate: 92.5% when tested with API services
- Results Summary
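The evaluation factors above suggest a simple per-example scorer. A hedged sketch (the `score_call` helper and its output keys are illustrative, not the benchmark's official harness; only the `{"tool": ..., "parameters": ...}` call shape comes from the card):

```python
def score_call(predicted: dict, gold: dict) -> dict:
    """Score one predicted function call against a reference call."""
    tool_ok = predicted.get("tool") == gold.get("tool")
    params_ok = predicted.get("parameters") == gold.get("parameters")
    return {
        "function_selection": tool_ok,                   # was the right tool chosen?
        "parameter_correctness": tool_ok and params_ok,  # with the right arguments?
    }


result = score_call(
    {"tool": "get_weather", "parameters": {"location": "New York"}},
    {"tool": "get_weather", "parameters": {"location": "New York"}},
)
```

Averaging such per-example flags over a test set yields accuracy-style metrics like those reported above; structured-JSON compliance would additionally check that the raw output parses at all.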
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch

  generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
  print("Generated Output:\n", generated_text)
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]