Astaxanthin committed on
Commit 2b60867 · verified · 1 Parent(s): e1e3e0f

Update README.md

Files changed (1)
  1. README.md +1 -127
README.md CHANGED

@@ -54,7 +54,7 @@ transform = transforms.Compose([
      transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
  ])

- example_image_path = './quick_start/example.tif'
+ example_image_path = './example.tif'
  example_text = ['an H&E image of breast invasive carcinoma.', 'an H&E image of normal tissue.', 'an H&E image of lung adenocarcinoma.']

  img_input = transform(Image.open(example_image_path).convert('RGB')).unsqueeze(0)

@@ -64,63 +64,6 @@ img_feature = model.encode_image(img_input)
  text_feature = model.encode_text(token_input)

  ```
- <!--
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed] -->

  ## Evaluation


@@ -134,18 +77,7 @@ Use the code below to get started with the model.

  We present benchmark results for a range of representative tasks. A complete set of benchmarks can be found in the [paper](https://arxiv.org/abs/2412.18***). These results will be updated with each new iteration of KEEP.

- <!-- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
- -->
  ### Results



@@ -183,42 +115,6 @@ We present benchmark results for a range of representative set

  Validated on 18 diverse benchmarks with more than 14,000 whole slide images (WSIs), KEEP achieves state-of-the-art performance in zero-shot cancer diagnostic tasks. Notably, for cancer detection, KEEP demonstrates an average sensitivity of 89.8% at a specificity of 95.0% across 7 cancer types, significantly outperforming vision-only foundation models and highlighting its promising potential for clinical application. For cancer subtyping, KEEP achieves a median balanced accuracy of 0.456 in subtyping 30 rare brain cancers, indicating strong generalizability for diagnosing rare tumors.

- <!--
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed] -->

  ## Citation [optional]


@@ -232,25 +128,3 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  journal={arXiv preprint arXiv:2412.13126},
  year={2024}
  }
- <!--
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed] -->
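Note: the hunks above show only fragments of the README's quick-start snippet around the changed `example_image_path` line. For context, here is a minimal sketch of how that snippet is typically assembled end to end. It assumes `model` and `tokenizer` have already been constructed as in the repository's quick-start; the resize step, the `tokenizer(...)` call, and the cosine-similarity scoring are assumptions that do not appear in this diff.

```python
import torch
from PIL import Image
from torchvision import transforms

# The Normalize values come from the diff's context lines; the Resize/ToTensor
# steps are assumptions, since the top of the Compose is outside the hunk.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

def zero_shot_probs(model, tokenizer, image_path, prompts):
    """Score one H&E tile against text prompts using an already-loaded model
    that exposes encode_image/encode_text, as in the README snippet."""
    img_input = transform(Image.open(image_path).convert('RGB')).unsqueeze(0)
    token_input = tokenizer(prompts)  # assumed tokenizer call; not shown in this diff
    with torch.no_grad():
        img_feature = model.encode_image(img_input)
        text_feature = model.encode_text(token_input)
    # Assumed CLIP-style scoring: cosine similarity followed by softmax over prompts.
    img_feature = img_feature / img_feature.norm(dim=-1, keepdim=True)
    text_feature = text_feature / text_feature.norm(dim=-1, keepdim=True)
    return (img_feature @ text_feature.T).softmax(dim=-1).squeeze(0)

# Example call with the path updated by this commit and the README's prompts
# (model/tokenizer construction omitted):
# probs = zero_shot_probs(model, tokenizer, './example.tif',
#                         ['an H&E image of breast invasive carcinoma.',
#                          'an H&E image of normal tissue.',
#                          'an H&E image of lung adenocarcinoma.'])
```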