sdk: static
pinned: false
---

<div style='text-align: center; margin-bottom: 1rem; display: flex; justify-content: center; align-items: center;'>
    <h1 style='color: white; margin: 0; font-size: 40px'>FastRTC</h1>
    <img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/fastrtc_logo.png"
         alt="FastRTC Logo"
         style="height: 100px; margin-right: 10px;">
</div>

<div style="display: flex; flex-direction: row; justify-content: center">
    <img style="display: block; padding-right: 5px; height: 20px;" alt="PyPI version" src="https://img.shields.io/pypi/v/fastrtc">
    <a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="GitHub repo" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>

<h3 style='text-align: center'>
The Real-Time Communication Library for Python.
</h3>

Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.

## Installation

```bash
pip install fastrtc
```

To use built-in pause detection (see [ReplyOnPause](https://fastrtc.org/)) and text-to-speech (see [Text To Speech](https://fastrtc.org/userguide/audio/#text-to-speech)), install the `vad` and `tts` extras:

```bash
pip install "fastrtc[vad, tts]"
```
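
With the `tts` extra installed, you can sanity-check text-to-speech in a few lines. This is a minimal sketch based on the [Text To Speech](https://fastrtc.org/userguide/audio/#text-to-speech) guide; the default model choice and the exact chunk format are assumptions to verify against the docs:

```python
from fastrtc import get_tts_model

# Requires the `tts` extra; the default model is downloaded on first use.
tts_model = get_tts_model()

# stream_tts_sync yields chunks that a Stream handler can yield directly,
# assumed here to be (sample_rate, numpy array) pairs.
for sample_rate, audio_chunk in tts_model.stream_tts_sync("Hello from FastRTC!"):
    print(sample_rate, audio_chunk.shape)
```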

## Key Features

- 🗣️ Automatic voice detection and turn taking built in, so you only need to worry about the logic for responding to the user.
- 💻 Automatic UI - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
- 🔌 Automatic WebRTC support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
- ⚡️ WebSocket support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!
- 📞 Automatic telephone support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app, so you can extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.

## Docs

[https://fastrtc.org](https://fastrtc.org)

## Examples

See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the library.

<table>
<tr>
<td width="50%">
<h3>🗣️👀 Gemini Audio Video Chat</h3>
<p>Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!</p>
<video width="100%" src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Google Gemini Real Time Voice API</h3>
<p>Talk to Gemini in real time using Google's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/ea6d18cb-8589-422b-9bba-56332d9f61de" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ OpenAI Real Time Voice API</h3>
<p>Talk to ChatGPT in real time using OpenAI's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/178bdadc-f17b-461a-8d26-e915c632ff80" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Hello Computer</h3>
<p>Say "computer" before asking your question!</p>
<video width="100%" src="https://github.com/user-attachments/assets/afb2a3ef-c1ab-4cfb-872d-578f895a10d5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/hello-computer">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/hello-computer/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova Systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/98523cf3-dac8-4127-9649-d91a997e3ef5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.ht APIs to have an audio conversation with Claude.</p>
<video width="100%" src="https://github.com/user-attachments/assets/fb6ef07f-3ccd-444a-997b-9bc9bdc035d3" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🎵 Whisper Transcription</h3>
<p>Have Whisper transcribe your speech in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/87603053-acdc-4c8a-810f-f618c49caafb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 YOLOv10 Object Detection</h3>
<p>Run the YOLOv10 model on a user's webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/f82feb74-a071-4e81-9110-a01989447ceb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/object-detection">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/object-detection/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ Kyutai Moshi</h3>
<p>Kyutai's Moshi is a novel speech-to-speech model for modeling human conversations.</p>
<video width="100%" src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Hello Llama: Stop Word Detection</h3>
<p>A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama". Build a Siri-like coding assistant in 100 lines of code!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
</tr>
</table>

## Usage

This is a shortened version of the official [usage guide](https://fastrtc.org/).

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Requires a Hugging Face token.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system.


## Quickstart

### Echo Audio

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np


def echo(audio: tuple[int, np.ndarray]):
    # The function is passed the user's audio until they pause.
    # Implement any iterator that yields (sample_rate, audio) chunks;
    # see "LLM Voice Chat" for a more complete example.
    yield audio


stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```
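
To try it out, launch the built-in UI with `stream.ui.launch()` (see "Running the Stream" below).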

### LLM Voice Chat

```py
import numpy as np
from fastrtc import (
    ReplyOnPause, Stream,
    audio_to_bytes, aggregate_bytes_to_16bit
)
from groq import Groq
import anthropic
from elevenlabs import ElevenLabs

groq_client = Groq()
claude_client = anthropic.Anthropic()
tts_client = ElevenLabs()


# See "Talk to Claude" in the Cookbook for an example of how to keep
# track of the chat history.
def response(
    audio: tuple[int, np.ndarray],
):
    # Transcribe the user's speech with Whisper on Groq
    prompt = groq_client.audio.transcriptions.create(
        file=("audio-file.mp3", audio_to_bytes(audio)),
        model="whisper-large-v3-turbo",
        response_format="verbose_json",
    ).text
    # Generate a reply with Claude
    claude_response = claude_client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    response_text = " ".join(
        block.text
        for block in claude_response.content
        if getattr(block, "type", None) == "text"
    )
    # Stream the reply back as 24 kHz PCM audio via ElevenLabs
    iterator = tts_client.text_to_speech.convert_as_stream(
        text=response_text,
        voice_id="JBFqnCBsd6RMkjVDRZzb",
        model_id="eleven_multilingual_v2",
        output_format="pcm_24000",
    )
    for chunk in aggregate_bytes_to_16bit(iterator):
        audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
        yield (24000, audio_array)


stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response),
)
```
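
This example assumes the `GROQ_API_KEY`, `ANTHROPIC_API_KEY`, and `ELEVENLABS_API_KEY` environment variables are set; each client reads its key from the environment by default.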

### Webcam Stream

```python
from fastrtc import Stream
import numpy as np


def flip_vertically(image):
    return np.flip(image, axis=0)


stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)
```
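
Video handlers receive each webcam frame as a numpy array and return the processed frame to send back to the client.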

### Object Detection

```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
from inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

model = YOLOv10(model_file)


def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))


stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```
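
The component in `additional_inputs` shows up in the UI, and its current value is passed to the handler as the extra `conf_threshold` argument.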

## Running the Stream

Once your `Stream` is defined, run it in any of the following ways:

### Gradio

```py
stream.ui.launch()
```

### Telephone (Audio Only)

```py
stream.fastphone()
```
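
Calling `fastphone()` starts the stream and prints a free temporary phone number to call (a Hugging Face token is required, as noted above).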

### FastAPI

```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)

# Optional: add your own routes
@app.get("/")
async def _():
    return HTMLResponse(content=open("index.html").read())

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```
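
Mounting registers the stream's WebRTC and WebSocket endpoints on the FastAPI app; see the [docs](https://fastrtc.org) for the exact routes and frontend client snippets.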