yadonglu committed
Commit · de06658
1 Parent(s): 71f73b6

fix readme

README.md CHANGED
@@ -3,13 +3,13 @@ library_name: transformers
 license: mit
 pipeline_tag: image-text-to-text
 ---
-📢 [[GitHub Repo](https://github.com/microsoft/OmniParser/tree/master)] [[OmniParser V2 Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/)]
+📢 [[GitHub Repo](https://github.com/microsoft/OmniParser/tree/master)] [[OmniParser V2 Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-v2-turning-any-llm-into-a-computer-use-agent/)] [Huggingface demo (soon)]
 
 # Model Summary
 OmniParser is a general screen parsing tool, which interprets/converts UI screenshot to structured format, to improve existing LLM based UI agent.
 Training Datasets include: 1) an interactable icon detection dataset, which was curated from popular web pages and automatically annotated to highlight clickable and actionable regions, and 2) an icon description dataset, designed to associate each UI element with its corresponding function.
 
-This model hub includes a finetuned version of YOLOv8 and a finetuned
+This model hub includes a finetuned version of YOLOv8 and a finetuned Florence-2 base model on the above dataset respectively. For more details of the models used and finetuning, please refer to the [paper](https://arxiv.org/abs/2408.00203).
 
 # What's new in V2?
 - Larger and cleaner set of icon caption + grounding dataset
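The updated line 12 names the two checkpoints this hub ships: a finetuned YOLOv8 icon detector and a finetuned Florence-2 base captioner. Below is a minimal sketch of wiring them together, assuming the hub id `microsoft/OmniParser-v2.0`, an `icon_detect/model.pt` / `icon_caption` file layout, and a `<CAPTION>` prompt; none of these are stated in this diff, so check the repo's file listing before relying on them. Requires `ultralytics`, `transformers`, and `huggingface_hub`.

```python
# A sketch only: the repo id, file paths, and "<CAPTION>" prompt are assumptions,
# not taken from this commit.
from huggingface_hub import hf_hub_download
from ultralytics import YOLO
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

REPO_ID = "microsoft/OmniParser-v2.0"  # assumed hub id for this model card

# Finetuned YOLOv8: detects interactable/clickable regions in a screenshot.
det_weights = hf_hub_download(REPO_ID, "icon_detect/model.pt")  # assumed path
detector = YOLO(det_weights)

# Finetuned Florence-2 base: describes each detected element's function.
captioner = AutoModelForCausalLM.from_pretrained(
    REPO_ID, subfolder="icon_caption", trust_remote_code=True  # assumed subfolder
)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-base", trust_remote_code=True
)

screenshot = Image.open("screenshot.png").convert("RGB")
boxes = detector(screenshot)[0].boxes.xyxy.tolist()  # [x1, y1, x2, y2] per icon

# Caption every detected crop; box + caption pairs form the structured,
# LLM-readable screen description the README summary refers to.
for x1, y1, x2, y2 in boxes:
    crop = screenshot.crop((x1, y1, x2, y2))
    inputs = processor(text="<CAPTION>", images=crop, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=32, do_sample=False)
    print((x1, y1, x2, y2),
          processor.batch_decode(out, skip_special_tokens=True)[0])
```

The full OmniParser pipeline additionally merges OCR text with these box/caption pairs before handing the result to a downstream LLM agent; see the GitHub repo linked above for the maintained implementation.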