# HandSpew

HandSpew is a simple web application that uses MediaPipe for hand landmark detection and Gemini 2.0 Flash for generating thoughts based on hand gestures. When you open your hand like a puppet mouth (thumb not touching other fingers), the app generates a thought related to what the camera sees.
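For context, here is a minimal sketch of how the browser-side landmark detection loop can be wired up with MediaPipe's `@mediapipe/tasks-vision` package. The model URL, options, and function names below are illustrative assumptions about the setup, not necessarily this repository's exact code:

```ts
// Sketch: initialize MediaPipe's HandLandmarker and run it on a <video> element
// every animation frame (assumes the @mediapipe/tasks-vision package).
import { FilesetResolver, HandLandmarker } from "@mediapipe/tasks-vision";

async function createLandmarker(): Promise<HandLandmarker> {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
  );
  return HandLandmarker.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath:
        "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task",
    },
    runningMode: "VIDEO",
    numHands: 1,
  });
}

function track(video: HTMLVideoElement, landmarker: HandLandmarker) {
  const loop = () => {
    const result = landmarker.detectForVideo(video, performance.now());
    if (result.landmarks.length > 0) {
      // result.landmarks[0] holds 21 normalized (x, y, z) points for one hand;
      // feed them to the gesture check described under "How to Use" below.
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}
```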

## Features

- Real-time hand landmark detection using MediaPipe
- Thought generation using Gemini 2.0 Flash
- Simple and intuitive UI
- Responsive design

## Getting Started

### Prerequisites

- Node.js 18.x or higher
- A Gemini API key from [Google AI Studio](https://ai.google.dev/)

### Installation

1. Clone the repository:

```bash
git clone https://github.com/yourusername/handspew.git
cd handspew
```

2. Install dependencies:

```bash
npm install
```

3. Create a `.env.local` file in the root directory and add your Gemini API key (a sketch of how the key might be used server-side follows these steps):

```
GEMINI_API_KEY=your_gemini_api_key_here
```

4. Start the development server:

```bash
npm run dev
```

5. Open [http://localhost:3000](http://localhost:3000) in your browser.
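For reference, the key configured in step 3 would typically be read server-side and never exposed to the browser. The following is a minimal sketch of such a route using the `@google/generative-ai` SDK; the route path, prompt text, and request shape are assumptions for illustration, not the repository's actual implementation:

```ts
// app/api/thought/route.ts — hypothetical route; assumes the @google/generative-ai
// SDK and a JSON body of { image: "<base64-encoded JPEG frame>" }.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { NextResponse } from "next/server";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

export async function POST(req: Request) {
  const { image } = await req.json();
  const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });
  const result = await model.generateContent([
    "In one short sentence, voice a playful thought about what you see in this image.",
    { inlineData: { data: image, mimeType: "image/jpeg" } },
  ]);
  return NextResponse.json({ thought: result.response.text() });
}
```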

## How to Use

1. Allow camera access when prompted
2. Position your hand in front of the camera
3. Open and close your hand like a puppet mouth (a minimal detection sketch follows this list):
   - When your thumb is touching another finger (closed mouth), no thoughts are generated
   - When your thumb is not touching any finger (open mouth), a thought is generated based on what the camera sees
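A sketch of how this open/closed decision can be made from the MediaPipe landmarks: compare the thumb tip against every other fingertip and treat the "mouth" as open only when the thumb touches none of them. The distance threshold here is an assumed value you would tune for your camera:

```ts
import type { NormalizedLandmark } from "@mediapipe/tasks-vision";

// MediaPipe hand landmark indices: 4 = thumb tip; 8, 12, 16, 20 = other fingertips.
const FINGERTIP_INDICES = [8, 12, 16, 20];
const TOUCH_THRESHOLD = 0.07; // distance in normalized coordinates (assumed, tune as needed)

// "Open mouth" means the thumb tip is NOT close to any other fingertip.
function isMouthOpen(landmarks: NormalizedLandmark[]): boolean {
  const thumb = landmarks[4];
  return FINGERTIP_INDICES.every((i) => {
    const tip = landmarks[i];
    const distance = Math.hypot(thumb.x - tip.x, thumb.y - tip.y, thumb.z - tip.z);
    return distance > TOUCH_THRESHOLD;
  });
}
```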

## Deployment

### Deploying to Hugging Face Spaces

1. Create a new Space on Hugging Face
2. Connect your GitHub repository
3. Add your Gemini API key as a secret in the Space settings
4. Deploy the app

## Technologies Used

- [Next.js](https://nextjs.org/) - React framework
- [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) - Hand landmark detection
- [Gemini 2.0 Flash](https://ai.google.dev/gemini-api/docs/vision) - Vision-based thought generation
- [Tailwind CSS](https://tailwindcss.com/) - Styling

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Google for providing the MediaPipe and Gemini APIs
- The Next.js team for the amazing framework