Update About.py
About.py
CHANGED
@@ -19,7 +19,7 @@ st.markdown(
 needs to be queried for all samples which is computationally/financially [expensive](https://cloud.google.com/vision/pricing). Here, we show that the documents
 can be preprocessed using just 4% of the total OCR queries.

-👈 Select **Denoise** in the sidebar to see document preprocessing with 100\%, 8\% and 4\%
+👈 Select **Denoise** in the sidebar to see document preprocessing with 100\%, 8\% and 4\% OCR query budget.
 """
 )
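For scale, the 4% figure in the hunk above translates directly into API cost. A back-of-the-envelope sketch; the $1.50 per 1,000 requests rate and the document count are illustrative assumptions (see the pricing link in the diff), not figures from this commit:

```python
# Back-of-the-envelope OCR cost comparison -- all numbers are assumptions.
PRICE_PER_1000 = 1.50  # USD per 1,000 OCR requests (assumed, Cloud Vision-style rate)
n_docs = 100_000       # hypothetical corpus size

full_cost = n_docs / 1000 * PRICE_PER_1000            # query every document
budget_cost = 0.04 * n_docs / 1000 * PRICE_PER_1000   # 4% query budget

print(f"100% budget: ${full_cost:,.2f}  vs  4% budget: ${budget_cost:,.2f}")
# -> 100% budget: $150.00  vs  4% budget: $6.00
```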
@@ -31,38 +31,4 @@ st.markdown(
 # ### See more complex demos
 # - Use a neural net to [analyze the Udacity Self-driving Car Image
 # Dataset](https://github.com/streamlit/demo-self-driving)
-# - Explore a [New York City rideshare dataset](https://github.com/streamlit/demo-uber-nyc-pickups)
-
-# st.write("")
-# st.write("")
-# st.write("")
-
-# st.markdown("##### This app allows you to compare, from a given picture, the results of different solutions:")
-# st.markdown("##### *EasyOcr, PaddleOCR, MMOCR, Tesseract*")
-# st.write("")
-# st.write("")
-
-# st.markdown(''' The 1st step is to choose the language for the text recognition (not all solutions \
-# support the same languages), and then choose the picture to consider. It is possible to upload a file, \
-# to take a picture, or to use a demo file. \
-# It is then possible to change the default values for the text area detection process, \
-# before launching the detection task for each solution.''')
-# st.write("")
-
-# st.markdown(''' The different results are then presented. The 2nd step is to choose one of these \
-# detection results, in order to carry out the text recognition process there. It is also possible to change \
-# the default settings for each solution.''')
-# st.write("")
-
-# st.markdown("###### The recognition results appear in 2 formats:")
-# st.markdown(''' - a visual format resumes the initial image, replacing the detected areas with \
-# the recognized text. The background is + or - strongly colored in green according to the \
-# confidence level of the recognition.
-# A slider allows you to change the font size, another \
-# allows you to modify the confidence threshold above which the text color changes: if it is at \
-# 70% for example, then all the texts with a confidence threshold higher or equal to 70 will appear \
-# in white, in black otherwise.''')
-
-# st.markdown(" - a detailed format presents the results in a table, for each text box detected. \
-# It is possible to download this results in a local csv file.")
-
+# - Explore a [New York City rideshare dataset](https://github.com/streamlit/demo-uber-nyc-pickups)
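For context on the "👈 Select **Denoise** in the sidebar" instruction kept in the first hunk: Streamlit multipage apps list any script placed in a `pages/` directory next to the entry point as a sidebar item. A minimal sketch of what such a page could look like; the file name, the radio widget, and its labels are hypothetical, not taken from this repository:

```python
# pages/Denoise.py -- hypothetical sketch of the page the About text points to.
# Any script placed in pages/ appears automatically as a sidebar entry.
import streamlit as st

st.title("Denoise")

# The About page advertises preprocessing at three OCR query budgets;
# a selector like this is one plausible way to switch between them.
budget = st.sidebar.radio("OCR query budget", ["100%", "8%", "4%"])
st.write(f"Showing documents denoised with a {budget} OCR query budget.")
```

Launched with `streamlit run About.py`, the Denoise entry would then show up in the sidebar alongside the About page.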