prithivMLmods committed on
Commit 248f7de · verified · 1 Parent(s): cf5921a

Create README.md

Files changed (1)
  1. README.md +75 -0
README.md ADDED

---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- deepfake
- Real
---

# **Fake-Real-Class-Siglip2**
**Fake-Real-Class-Siglip2** is an image-classification model fine-tuned from the **google/siglip2-base-patch16-224** vision-language encoder for a single-label classification task. It is designed to **classify images as either Fake or Real** using the **SiglipForImageClassification** architecture.

The model categorizes images into two classes:
- **Class 0:** "Fake" – The image is detected as AI-generated, manipulated, or synthetic.
- **Class 1:** "Real" – The image is classified as authentic and unaltered.

```python
!pip install -q transformers torch pillow gradio
```
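
The class-index mapping listed above can also be read directly from the checkpoint's configuration. A minimal check, assuming the uploaded config carries the standard `id2label` field:

```python
from transformers import SiglipForImageClassification

# Load only the model to inspect its label mapping
model = SiglipForImageClassification.from_pretrained("prithivMLmods/Fake-Real-Class-Siglip2")

# Expected to print {0: 'Fake', 1: 'Real'} per the class list above
print(model.config.id2label)
```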

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load the fine-tuned model and its image processor
model_name = "prithivMLmods/Fake-Real-Class-Siglip2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def classify_image(image):
    """Classifies an image as Fake or Real."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    # Map class indices to human-readable labels, rounding the probabilities
    labels = model.config.id2label
    predictions = {labels[i]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create the Gradio interface
iface = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Classification Result"),
    title="Fake vs Real Image Classification",
    description="Upload an image to determine if it is Fake or Real."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
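
If the Gradio UI is not needed, the same checkpoint can be called directly on an image file. A minimal sketch, where `example.jpg` is a placeholder path for your own image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Fake-Real-Class-Siglip2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "example.jpg" is a placeholder; point this at a real file
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Report the highest-scoring class and its probability
probs = torch.softmax(logits, dim=1).squeeze()
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], round(float(probs[pred_id]), 3))
```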

# **Intended Use:**

The **Fake-Real-Class-Siglip2** model is designed to classify images into two categories: **Fake** or **Real**. It helps detect AI-generated or manipulated images.

### Potential Use Cases:
- **Fake Image Detection:** Identifying AI-generated or altered images.
- **Content Verification:** Assisting platforms in filtering misleading media (see the thresholding sketch after this list).
- **Forensic Analysis:** Supporting research in detecting synthetic media.
- **Authenticity Checks:** Helping journalists and investigators verify image credibility.
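
For the content-verification use case, one common pattern is to flag an image only when its "Fake" score clears a chosen threshold instead of always taking the top class. A minimal sketch that reuses the `classify_image` function from the Gradio example above; the 0.7 default and the file names are illustrative placeholders, not calibrated recommendations:

```python
import numpy as np
from PIL import Image

def is_probably_fake(image_path, threshold=0.7):
    """Returns True when the model's "Fake" probability meets the threshold.

    Reuses `classify_image` from the example above, which expects a NumPy
    array. The 0.7 default is illustrative; tune it on your own data.
    """
    image = np.array(Image.open(image_path).convert("RGB"))
    predictions = classify_image(image)
    return predictions.get("Fake", 0.0) >= threshold

# Example: screen a batch of candidate files (paths are placeholders)
flagged = [p for p in ["img_001.jpg", "img_002.jpg"] if is_probably_fake(p)]
```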