CohenQu committed (verified)
Commit 09f06ab · 1 Parent(s): 6a00f74

Model save

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-datasets: hf-cmu-collab/DeepScaleR-1.5B-Preview_alpha_0.1_Iteration1_on-policy_GRPO
+base_model: agentica-org/DeepScaleR-1.5B-Preview
 library_name: transformers
 model_name: DeepSeek-R1-Distill-Qwen-7B-GRPO
 tags:
@@ -11,7 +11,7 @@ licence: license
 
 # Model Card for DeepSeek-R1-Distill-Qwen-7B-GRPO
 
-This model is a fine-tuned version of [None](https://huggingface.co/None) on the [hf-cmu-collab/DeepScaleR-1.5B-Preview_alpha_0.1_Iteration1_on-policy_GRPO](https://huggingface.co/datasets/hf-cmu-collab/DeepScaleR-1.5B-Preview_alpha_0.1_Iteration1_on-policy_GRPO) dataset.
+This model is a fine-tuned version of [agentica-org/DeepScaleR-1.5B-Preview](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/backtrack-rl/runs/d4bwe9aj)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuxiao98/backtrack-rl/runs/w6uctuio)
 
 
 This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
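For readers landing on this commit without the rendered card: the `## Quick start` section elided by the `@@ -27,7 +27,7 @@` hunk follows TRL's auto-generated model-card template. A minimal sketch of that pattern is below; the repo id is an assumption inferred from this page's committer and the card's `model_name`, not a path confirmed by the diff.

```python
# Sketch of the TRL-template Quick start; the model id below is a guess
# (committer + model_name), not confirmed by this diff.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="CohenQu/DeepSeek-R1-Distill-Qwen-7B-GRPO",  # hypothetical repo id
    device="cuda",
)
question = "What is 7 * 8? Show your reasoning."
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])  # matches the context line in the hunk header
```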
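Since the card only links the GRPO paper, here is a self-contained sketch of what a GRPO run with TRL's `GRPOTrainer` looks like, wiring in the `base_model` from the new front matter and the dataset named in the old one. The reward function, the assumption that the dataset exposes a `prompt` column in TRL's standard (non-conversational) format, and every hyperparameter are illustrative, not the settings of the W&B run linked above.

```python
# Illustrative GRPO fine-tuning sketch with TRL; the reward function and all
# hyperparameters are assumptions, not the settings of the logged run.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Dataset named in the pre-change front matter; assumed to have a "prompt" column.
dataset = load_dataset(
    "hf-cmu-collab/DeepScaleR-1.5B-Preview_alpha_0.1_Iteration1_on-policy_GRPO",
    split="train",
)

def reward_boxed(completions, **kwargs):
    # Toy reward: +1 when the completion contains a boxed answer, else 0.
    return [1.0 if "\\boxed{" in c else 0.0 for c in completions]

training_args = GRPOConfig(
    output_dir="DeepSeek-R1-Distill-Qwen-7B-GRPO",
    num_generations=8,              # completions sampled per prompt (the "group")
    per_device_train_batch_size=8,  # must be divisible by num_generations
)

trainer = GRPOTrainer(
    model="agentica-org/DeepScaleR-1.5B-Preview",  # base_model from the new front matter
    reward_funcs=reward_boxed,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

The "group-relative" part of GRPO comes from `num_generations`: each prompt is sampled that many times, and each completion's advantage is its reward normalized against the group's mean and standard deviation, so no separate value model is needed.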