Dango233 committed on
Commit 99b0244 · 1 Parent(s): 421f494

CoT-Lab demo ver.

Files changed (6)
  1. README.md +90 -0
  2. README_zh.md +89 -0
  3. app.py +322 -0
  4. lang.py +61 -0
  5. requirements.txt +3 -0
  6. styles.css +1 -0
README.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ title: "CoT-Lab: Human-AI Co-Thinking Laboratory"
+ emoji: "🤖"
+ colorFrom: "blue"
+ colorTo: "gray"
+ sdk: "gradio"
+ python_version: "3.13"
+ sdk_version: "5.13.1"
+ app_file: "app.py"
+ models:
+ - "deepseek-ai/DeepSeek-R1"
+ tags:
+ - "writing-assistant"
+ - "multilingual"
+ license: "mit"
+ ---
+
+ # CoT-Lab: Human-AI Co-Thinking Laboratory
+ [Huggingface Spaces 🤗](https://huggingface.co/spaces/Intelligent-Internet/CoT-Lab) | [GitHub Repository 🌐](https://github.com/Intelligent-Internet/CoT-Lab)
+ [中文README](README_zh.md)
+
+ **Sync your thinking with AI reasoning models to achieve deeper cognitive alignment**
+ Follow, learn, and iterate the thought within one turn
+
+ ## 🌟 Introduction
+ CoT-Lab is an experimental interface exploring new paradigms in human-AI collaboration. Based on **Cognitive Load Theory** and **Active Learning** principles, it creates a "**Thought Partner**" relationship by enabling:
+
+ - 🧠 **Cognitive Synchronization**
+ Slow-paced AI output aligned with human information-processing speed
+ - ✍️ **Collaborative Thought Weaving**
+ Active human participation in the AI's Chain of Thought
+
+ **This project is part of an ongoing exploration. It is under active development; discussion and feedback are welcome!**
+
+ ## 🛠 Usage Guide
+ ### Basic Operation
+ 1. **Set Initial Prompt**
+ Describe your task in the input box (e.g., "Explain quantum computing basics")
+
+ 2. **Adjust Cognitive Parameters**
+ - ⏱ **Thought Sync Throughput**: tokens/sec - 5: read-aloud, 10: follow-along, 50: skim
+ - 📏 **Human Thinking Cadence**: auto-pause every X paragraphs (default off - recommended for active learning)
+
+ 3. **Interactive Workflow**
+ - Click `Generate` to start co-thinking and follow the thinking process
+ - Edit the AI's reasoning when it pauses - or pause it anytime with `Shift+Enter`
+ - Use `Shift+Enter` to hand control back to the AI
+
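The "Thought Sync Throughput" control above boils down to a pacing loop: emit tokens no faster than a target rate. Here is a minimal sketch of that idea (an illustration only, not the app's implementation; the token list stands in for a real model stream):

```python
import time

def paced_stream(tokens, tokens_per_sec=10):
    """Yield tokens no faster than the requested throughput."""
    interval = 1.0 / tokens_per_sec
    for tok in tokens:
        slot_start = time.time()
        yield tok
        # Sleep off whatever remains of this token's time slot.
        remaining = interval - (time.time() - slot_start)
        if remaining > 0:
            time.sleep(remaining)

if __name__ == "__main__":
    for tok in paced_stream(["Pacing", " the", " chain", " of", " thought."], tokens_per_sec=50):
        print(tok, end="", flush=True)
    print()
```

Lower rates leave the reader time to process each token, which is the point of the "read-aloud" and "follow-along" presets.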
+ ## 🧠 Design Philosophy
+ - **Cognitive Load Optimization**
+ Information chunking adapts to working-memory limits; serialized presentation reduces the cognitive load of visual search
+
+ - **Active Learning Enhancement**
+ A direct-manipulation interface promotes deeper cognitive engagement
+
+ - **Distributed Cognition**
+ Explores hybrid human-AI problem-solving paradigms
+
+ ## 📥 Installation & Deployment
+ Local deployment is (currently) required if you want to work with locally hosted LLMs.
+
+ **Prerequisites**: Python 3.11+ | A valid [Deepseek API Key](https://platform.deepseek.com/) or any OpenAI chat.completions-compatible API.
+
+ ```bash
+ # Clone repository
+ git clone https://github.com/Intelligent-Internet/CoT-Lab
+ cd CoT-Lab
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Configure environment (written to .env, loaded via python-dotenv)
+ cat > .env <<EOF
+ API_KEY=sk-****
+ API_URL=https://api.deepseek.com/beta
+ API_MODEL=deepseek-reasoner
+ EOF
+
+ # Launch application
+ python app.py
+ ```
+
+ ### API Settings for Serving
+ You can set the environment variable `API_KEY` to hide the key from the frontend.
+ Only OpenAI chat.completions-compatible APIs are supported for now.
+
+ ## 📄 License
+ MIT License © 2024 [ii.inc]
+
+ ## Contact
+ [email protected] (Dango233)
README_zh.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ title: "CoT-Lab: 人机协同思考实验室"
+ emoji: "🤖"
+ colorFrom: "blue"
+ colorTo: "gray"
+ sdk: "gradio"
+ python_version: "3.13"
+ sdk_version: "5.13.1"
+ app_file: "app.py"
+ models:
+ - "deepseek-ai/DeepSeek-R1"
+ tags:
+ - "写作助手"
+ - "多语言"
+ license: "mit"
+ ---
+
+ # CoT-Lab: 人机协同思考实验室
+ [Huggingface空间 🤗](https://huggingface.co/spaces/Intelligent-Internet/CoT-Lab) | [GitHub仓库 🌐](https://github.com/Intelligent-Internet/CoT-Lab)
+ [English README](README.md)
+
+ **通过同步人类与AI的思考过程,实现深层次的认知对齐**
+ 在一轮对话中跟随、学习、迭代思维链
+
+ ## 🌟 项目介绍
+ CoT-Lab是一个探索人机协作新范式的实验性界面,基于**认知负荷理论**和**主动学习**原则,致力于探索人与AI的"**思考伙伴**"关系。
+
+ - 🧠 **认知同步**
+ 调节AI输出速度,匹配不同场景下的人类信息处理速度
+ - ✍️ **思维编织**
+ 人类主动参与AI的思维链
+
+ **探索性实验项目,正在积极开发中,欢迎讨论与反馈!**
+
+ ## 🛠 使用指南
+ ### 基本操作
+ 1. **设置初始提示**
+ 在输入框描述您的问题(例如"解释量子计算基础")
+
+ 2. **调整认知参数**
+ - ⏱ **思考同步速度**:词元/秒 - 5:朗读, 10:跟随, 50:跳读
+ - 📏 **人工思考节奏**:每X段落自动暂停(默认关闭 - 推荐主动学习场景使用)
+
+ 3. **交互工作流**
+ - 点击`生成`开始协同思考,跟随思考过程
+ - AI暂停时可编辑推理过程 - 或随时使用`Shift+Enter`暂停AI输出,进入思考、编辑模式
+ - 思考、编辑后,使用`Shift+Enter`交还控制权给AI
+
+ ## 🧠 设计理念
+ - **认知负荷优化**
+ 信息组块化(Chunking)适配工作记忆限制,序列化信息呈现降低视觉搜索带来的认知负荷
+
+ - **主动学习增强**
+ 直接操作思维链,促进深度认知投入
+
+ - **分布式认知**
+ 探索人机协作的问题解决范式
+
+ ## 📥 安装部署
+ 如希望使用本地部署的大语言模型,您(暂时)需要克隆本项目并在本地运行。
+
+ **环境要求**:Python 3.11+ | 有效的[Deepseek API密钥](https://platform.deepseek.com/) 或任何OpenAI chat.completions接口兼容的API。
+
+ ```bash
+ # 克隆仓库
+ git clone https://github.com/Intelligent-Internet/CoT-Lab
+ cd CoT-Lab
+
+ # 安装依赖
+ pip install -r requirements.txt
+
+ # 配置环境(写入 .env,由 python-dotenv 加载)
+ cat > .env <<EOF
+ API_KEY=sk-****
+ API_URL=https://api.deepseek.com/beta
+ API_MODEL=deepseek-reasoner
+ EOF
+
+ # 启动应用
+ python app.py
+ ```
+
+ ### API服务设置
+ 可通过环境变量`API_KEY`在前端隐藏密钥。
+ 目前仅支持OpenAI chat.completions兼容API。
+
+ ## 📄 许可协议
+ MIT License © 2024 [ii.inc]
+
+ ## Contact
+ [email protected] (Dango233)
app.py ADDED
@@ -0,0 +1,322 @@
+ from openai import OpenAI
+ from dotenv import load_dotenv
+ import os
+ import time
+ import gradio as gr
+ from lang import LANGUAGE_CONFIG
+
+ # Pre-validate required environment variables
+ load_dotenv(override=True)
+ required_env_vars = ["API_KEY", "API_URL", "API_MODEL"]
+ missing_vars = [var for var in required_env_vars if not os.getenv(var)]
+ if missing_vars:
+     raise EnvironmentError(f"Missing required environment variables: {', '.join(missing_vars)}")
+
+
+
+ class AppConfig:
+     DEFAULT_THROUGHPUT = 10
+     SYNC_THRESHOLD_DEFAULT = 0
+     API_TIMEOUT = 20
+     LOADING_DEFAULT = "✅ Ready! <br> Think together with AI. Use Shift+Enter to toggle generation"
+
+
+ class DynamicState:
+     """Dynamic UI state"""
+
+     def __init__(self):
+         self.should_stream = False
+         self.stream_completed = False
+         self.in_cot = True
+         self.current_language = "en"
+
+     def control_button_handler(self):
+         """Toggle the streaming state"""
+         if self.should_stream:
+             self.should_stream = False
+         else:
+             self.stream_completed = False
+             self.should_stream = True
+         return self.ui_state_controller()
+
+     def ui_state_controller(self):
+         """Build updates for the dynamic UI components"""
+         # [control_button, status_indicator, thought_editor, reset_button]
+         lang_data = LANGUAGE_CONFIG[self.current_language]
+         control_value = lang_data["pause_btn"] if self.should_stream else lang_data["generate_btn"]
+         control_variant = "secondary" if self.should_stream else "primary"
+         status_value = lang_data["completed"] if self.stream_completed else lang_data["interrupted"]
+         return (
+             gr.update(value=control_value, variant=control_variant),
+             gr.update(value=status_value),
+             gr.update(),
+             gr.update(interactive=not self.should_stream),
+         )
+
+     def reset_workspace(self):
+         """Reset workspace state"""
+         self.stream_completed = False
+         self.should_stream = False
+         self.in_cot = True
+         return self.ui_state_controller() + ("", "", LANGUAGE_CONFIG["en"]["bot_default"])
+
+ class CoordinationManager:
+     """Manages the co-thinking cadence between human and AI"""
+
+     def __init__(self, paragraph_threshold, initial_content):
+         self.paragraph_threshold = paragraph_threshold
+         self.initial_paragraph_count = initial_content.count("\n\n")
+         self.triggered = False
+
+     def should_pause_for_human(self, current_content):
+         if self.paragraph_threshold <= 0 or self.triggered:
+             return False
+
+         current_paragraphs = current_content.count("\n\n")
+         if current_paragraphs - self.initial_paragraph_count >= self.paragraph_threshold:
+             self.triggered = True
+             return True
+         return False
+
+
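The trigger above counts double-newline paragraph breaks added since generation (re)started. The counting rule can be exercised in isolation (a hypothetical standalone helper that mirrors the class's logic, not the class itself):

```python
def new_paragraphs(initial_content, current_content):
    """Count paragraph breaks ("\n\n") added since generation (re)started."""
    return current_content.count("\n\n") - initial_content.count("\n\n")

initial = "First thought.\n\n"
current = initial + "Second thought.\n\nThird thought.\n\n"
print(new_paragraphs(initial, current))  # → 2
```

With a cadence threshold of 2, this state would flip `should_pause_for_human` to True and hand the turn back to the human.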
+ class ConvoState:
+     """State of the current round of conversation"""
+
+     def __init__(self):
+         self.throughput = AppConfig.DEFAULT_THROUGHPUT
+         self.sync_threshold = AppConfig.SYNC_THRESHOLD_DEFAULT
+         self.current_language = "en"
+         self.convo = []
+         self.initialize_new_round()
+
+     def initialize_new_round(self):
+         self.current = {}
+         self.current["user"] = ""
+         self.current["cot"] = ""
+         self.current["result"] = ""
+         self.convo.append(self.current)
+
+     def flatten_output(self):
+         output = []
+         for round in self.convo:
+             output.append({"role": "user", "content": round["user"]})
+             if len(round["cot"]) > 0:
+                 output.append({"role": "assistant", "content": round["cot"], "metadata": {"title": "Chain of Thought"}})
+             if len(round["result"]) > 0:
+                 output.append({"role": "assistant", "content": round["result"]})
+         return output
+
+     def generate_ai_response(self, user_prompt, current_content, dynamic_state):
+         lang_data = LANGUAGE_CONFIG[self.current_language]
+         dynamic_state.stream_completed = False
+         full_response = current_content
+         api_client = OpenAI(
+             api_key=os.getenv("API_KEY"),
+             base_url=os.getenv("API_URL"),
+             timeout=AppConfig.API_TIMEOUT,
+         )
+         coordinator = CoordinationManager(self.sync_threshold, current_content)
+
+         try:
+             messages = [
+                 {"role": "user", "content": user_prompt},
+                 {"role": "assistant", "content": f"<think>\n{current_content}", "prefix": True},
+             ]
+             self.current["user"] = user_prompt
+             response_stream = api_client.chat.completions.create(
+                 model=os.getenv("API_MODEL"),
+                 messages=messages,
+                 stream=True,
+                 timeout=AppConfig.API_TIMEOUT,
+             )
+             for chunk in response_stream:
+                 chunk_content = chunk.choices[0].delta.content
+                 if coordinator.should_pause_for_human(full_response):
+                     dynamic_state.should_stream = False
+                 if not dynamic_state.should_stream:
+                     break
+
+                 if chunk_content:
+                     full_response += chunk_content
+                     # Update conversation state
+                     think_complete = "</think>" in full_response
+                     dynamic_state.in_cot = not think_complete
+                     if think_complete:
+                         self.current["cot"], self.current["result"] = full_response.split("</think>", 1)
+                     else:
+                         self.current["cot"], self.current["result"] = (full_response, "")
+                     status = lang_data["loading_thinking"] if dynamic_state.in_cot else lang_data["loading_output"]
+                     yield full_response, status, self.flatten_output()
+
+                     # Pace output to the configured throughput (tokens/sec)
+                     interval = 1.0 / self.throughput
+                     start_time = time.time()
+                     while (time.time() - start_time) < interval and dynamic_state.should_stream:
+                         time.sleep(0.005)
+
+         except Exception as e:
+             error_msg = LANGUAGE_CONFIG[self.current_language].get("error", "Error")
+             full_response += f"\n\n[{error_msg}: {str(e)}]"
+             yield full_response, error_msg, self.flatten_output() + [{"role": "assistant", "content": error_msg, "metadata": {"title": "❌Error"}}]
+
+         finally:
+             dynamic_state.should_stream = False
+             if "status" not in locals():
+                 status = "Whoops... ERROR"
+             if "response_stream" in locals():
+                 response_stream.close()
+             yield full_response, status, self.flatten_output()
+
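The generator above separates the chain of thought from the final answer on the `</think>` tag. That rule can be isolated into a small helper (a sketch of the same splitting logic; `maxsplit=1` guards against a stray repeated tag in the result):

```python
def split_cot(full_response):
    """Split a streamed response into (chain_of_thought, final_result)."""
    if "</think>" in full_response:
        cot, result = full_response.split("</think>", 1)
        return cot, result
    # Still thinking: everything so far is chain of thought.
    return full_response, ""

print(split_cot("step 1\nstep 2\n</think>\nAnswer: 42"))
```

While the tag is absent, the whole buffer is editable reasoning; once it appears, everything after it is rendered as the answer.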
+ def update_interface_language(selected_lang, convo_state, dynamic_state):
+     """Update the interface language configuration"""
+     convo_state.current_language = selected_lang
+     dynamic_state.current_language = selected_lang
+     lang_data = LANGUAGE_CONFIG[selected_lang]
+     return [
+         gr.update(value=f"{lang_data['title']}"),
+         gr.update(label=lang_data["prompt_label"], placeholder=lang_data["prompt_placeholder"]),
+         gr.update(label=lang_data["editor_label"], placeholder=lang_data["editor_placeholder"]),
+         gr.update(label=lang_data["sync_threshold_label"], info=lang_data["sync_threshold_info"]),
+         gr.update(label=lang_data["throughput_label"], info=lang_data["throughput_info"]),
+         gr.update(
+             value=lang_data["pause_btn"] if dynamic_state.should_stream else lang_data["generate_btn"],
+             variant="secondary" if dynamic_state.should_stream else "primary",
+         ),
+         gr.update(label=lang_data["language_label"]),
+         gr.update(value=lang_data["clear_btn"], interactive=not dynamic_state.should_stream),
+         gr.update(value=lang_data["introduction"]),
+         gr.update(value=lang_data["bot_default"]),
+     ]
+
+
+ theme = gr.themes.Base(font="system-ui", primary_hue="stone")
+
+ with gr.Blocks(theme=theme, css_paths="styles.css") as demo:
+     convo_state = gr.State(ConvoState)
+     dynamic_state = gr.State(DynamicState)  # per-session dynamic UI state
+
+     with gr.Row():
+         title_md = gr.Markdown(f"## {LANGUAGE_CONFIG['en']['title']}", container=False)
+         lang_selector = gr.Dropdown(
+             choices=["en", "zh"],
+             value="en",
+             elem_id="compact_lang_selector",
+             scale=0,
+             container=False,
+         )
+
+     with gr.Row(equal_height=True):
+         # Conversation panel
+         with gr.Column(scale=1, min_width=500):
+             chatbot = gr.Chatbot(
+                 type="messages",
+                 height=300,
+                 value=LANGUAGE_CONFIG["en"]["bot_default"],
+                 group_consecutive_messages=False,
+                 show_copy_all_button=True,
+                 show_share_button=True,
+             )
+             prompt_input = gr.Textbox(
+                 label=LANGUAGE_CONFIG["en"]["prompt_label"],
+                 lines=2,
+                 placeholder=LANGUAGE_CONFIG["en"]["prompt_placeholder"],
+                 max_lines=5,
+             )
+             with gr.Row():
+                 control_button = gr.Button(
+                     value=LANGUAGE_CONFIG["en"]["generate_btn"],
+                     variant="primary",
+                 )
+                 next_turn_btn = gr.Button(
+                     value=LANGUAGE_CONFIG["en"]["clear_btn"],
+                     interactive=True,
+                 )
+             status_indicator = gr.Markdown(AppConfig.LOADING_DEFAULT)
+             intro_md = gr.Markdown(LANGUAGE_CONFIG["en"]["introduction"], visible=False)
+
+         # Thought editor panel
+         with gr.Column(scale=1, min_width=400):
+             thought_editor = gr.Textbox(
+                 label=LANGUAGE_CONFIG["en"]["editor_label"],
+                 lines=16,
+                 placeholder=LANGUAGE_CONFIG["en"]["editor_placeholder"],
+                 autofocus=True,
+                 elem_id="editor",
+             )
+             with gr.Row():
+                 sync_threshold_slider = gr.Slider(
+                     minimum=0,
+                     maximum=20,
+                     value=AppConfig.SYNC_THRESHOLD_DEFAULT,
+                     step=1,
+                     label=LANGUAGE_CONFIG["en"]["sync_threshold_label"],
+                     info=LANGUAGE_CONFIG["en"]["sync_threshold_info"],
+                 )
+                 throughput_control = gr.Slider(
+                     minimum=1,
+                     maximum=100,
+                     value=AppConfig.DEFAULT_THROUGHPUT,
+                     step=1,
+                     label=LANGUAGE_CONFIG["en"]["throughput_label"],
+                     info=LANGUAGE_CONFIG["en"]["throughput_info"],
+                 )
+
+     # Interaction logic
+     stateful_ui = (control_button, status_indicator, thought_editor, next_turn_btn)
+
+     throughput_control.change(
+         lambda val, s: setattr(s, "throughput", val),
+         [throughput_control, convo_state],
+         None,
+         queue=False,
+     )
+
+     sync_threshold_slider.change(
+         lambda val, s: setattr(s, "sync_threshold", val),
+         [sync_threshold_slider, convo_state],
+         None,
+         queue=False,
+     )
+
+     def wrap_stream_generator(convo_state, dynamic_state, prompt, content):
+         for response in convo_state.generate_ai_response(prompt, content, dynamic_state):
+             yield response
+
+     gr.on(  # main button trigger
+         [control_button.click, prompt_input.submit, thought_editor.submit],
+         lambda d: d.control_button_handler(),
+         [dynamic_state],
+         stateful_ui,
+         show_progress=False,
+     ).then(  # generation event
+         wrap_stream_generator,
+         [convo_state, dynamic_state, prompt_input, thought_editor],
+         [thought_editor, status_indicator, chatbot],
+         concurrency_limit=100,
+     ).then(  # decide UI state once generation stops
+         lambda d: d.ui_state_controller(),
+         [dynamic_state],
+         stateful_ui,
+         show_progress=False,
+     )
+
+     next_turn_btn.click(
+         lambda d: d.reset_workspace(),
+         [dynamic_state],
+         stateful_ui + (thought_editor, prompt_input, chatbot),
+         queue=False,
+     )
+
+     lang_selector.change(
+         update_interface_language,
+         [lang_selector, convo_state, dynamic_state],
+         [title_md, prompt_input, thought_editor, sync_threshold_slider,
+          throughput_control, control_button, lang_selector, next_turn_btn, intro_md, chatbot],
+         queue=False,
+     )
+
+ if __name__ == "__main__":
+     demo.queue(default_concurrency_limit=10000)
+     demo.launch()
lang.py ADDED
@@ -0,0 +1,61 @@
+ # lang.py
+ LANGUAGE_CONFIG = {
+     "en": {
+         "title": "CoT-Lab: Human-AI Co-Thinking Laboratory \nFollow, learn, and iterate the thought within one turn.",
+         "prompt_label": "Task Description - Prompt",
+         "prompt_placeholder": "Enter your prompt here...",
+         "editor_label": "Thinking Process",
+         "editor_placeholder": "The AI's thinking process will appear here... You can edit when it pauses",
+         "generate_btn": "Generate",
+         "pause_btn": "Pause",
+         "sync_threshold_label": "🧠 Human Thinking Cadence",
+         "sync_threshold_info": "Pause for human turn per X paragraphs",
+         "throughput_label": "⏱ Sync Rate",
+         "throughput_info": "Tokens/s - 5:Learn, 10:Follow, 50:Skim",
+         "language_label": "Language",
+         "loading_thinking": "🤖 AI Thinking...<br>Shift+Enter to Pause",
+         "loading_output": "🖨️ Writing result...<br>Shift+Enter to Pause",
+         "interrupted": "🤔 Paused, human thinking time <br>Shift+Enter to hand over to AI",
+         "completed": "✅ Completed <br> Find the formatted final result in the results tab",
+         "error": "Error",
+         "api_config_label": "API Configuration",
+         "api_key_label": "API Key",
+         "api_key_placeholder": "Leave empty to use environment variable",
+         "api_url_label": "API URL",
+         "api_url_placeholder": "Leave empty for default URL",
+         "clear_btn": "Clear Thinking",
+         "introduction": "Think together with AI. Use `Shift+Enter` to toggle generation <br>You can modify the thinking process when AI pauses",
+         "bot_default": [
+             {"role": "assistant", "content": "Welcome to our co-thinking space! Ready to synchronize our cognitive rhythms? \n Shall we start by adjusting the throughput slider to match your reading pace? \n Enter your task below, edit my thinking process when I pause, and let's begin weaving thoughts together →"},
+         ],
+     },
+     "zh": {
+         "title": "CoT-Lab: 人机协同思维实验室\n在一轮对话中跟随、学习、迭代思维链",
+         "prompt_label": "任务描述 - 提示词",
+         "prompt_placeholder": "在此输入您的问题...",
+         "editor_label": "思考过程",
+         "editor_placeholder": "AI的思考过程将在此显示...您可以在暂停的时候编辑",
+         "generate_btn": "生成",
+         "pause_btn": "暂停",
+         "sync_threshold_label": "🧠 人类思考间隔",
+         "sync_threshold_info": "段落数 - 每X段落自动暂停进入人类思考时间",
+         "throughput_label": "⏱ 同步思考速度",
+         "throughput_info": "词元/秒 - 5:学习, 10:跟读, 50:跳读",
+         "language_label": "界面语言",
+         "loading_thinking": "🤖 AI思考中... <br>Shift+Enter可暂停",
+         "loading_output": "🖨️ 结果输出中... <br>Shift+Enter可暂停",
+         "interrupted": "🤔 暂停,人类思考回合<br>Shift+Enter交回给AI",
+         "completed": "✅ 已完成 <br> 可在结果标签页中查看渲染了格式的最终结果",
+         "error": "错误",
+         "api_config_label": "API配置",
+         "api_key_label": "API密钥",
+         "api_key_placeholder": "留空使用环境变量",
+         "api_url_label": "API地址",
+         "api_url_placeholder": "留空使用默认地址",
+         "clear_btn": "清空思考",
+         "introduction": "和AI一起思考,Shift+Enter切换生成状态<br>AI暂停的时候你可以编辑思维过程",
+         "bot_default": [
+             {"role": "assistant", "content": "欢迎来到协同思考空间!准备好同步我们的认知节奏了吗?\n 建议先调整右侧的'同步思考速度'滑块,让它匹配你的阅读速度 \n 在下方输入任务描述,在我暂停时修改我的思维,让我们开始编织思维链条 →"},
+         ],
+     },
+ }
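A table like `LANGUAGE_CONFIG` is typically consumed through a lookup with an English fallback, so a missing key in one locale degrades gracefully instead of raising. A hypothetical helper (not part of lang.py; the two-language stub below is trimmed for illustration):

```python
# Trimmed stand-in for the real table in lang.py.
LANGUAGE_CONFIG = {
    "en": {"generate_btn": "Generate", "pause_btn": "Pause"},
    "zh": {"generate_btn": "生成"},
}

def ui_text(lang, key):
    """Fetch a UI string, falling back to English for unknown locales or keys."""
    return LANGUAGE_CONFIG.get(lang, {}).get(key, LANGUAGE_CONFIG["en"][key])

print(ui_text("zh", "generate_btn"))  # → 生成
print(ui_text("zh", "pause_btn"))     # → Pause (English fallback)
```

This keeps the "en" block as the authoritative key set while letting other locales be filled in incrementally.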
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ openai
+ python-dotenv
+ gradio
styles.css ADDED
@@ -0,0 +1 @@
+ footer { display: none !important; }