Tech Meld (Tech-Meld)
6 followers · 7 following
Tech_Meld_ · Tech-ware
AI & ML interests
I love ML. I have been learning it since the summer of 2023, and a lot has happened since then, from basic knowledge to building self-made language models and diffusion models. Everything is still in its early stages, but the work continues.
Recent Activity
Replied to mitkox's post about 20 hours ago:
llama.cpp is 26.8% faster than ollama. I have upgraded both, and using the same settings I am running the same DeepSeek R1 Distill 1.5B on the same hardware. It's an apples-to-apples comparison.

Total duration:
llama.cpp 6.85 sec <- 26.8% faster
ollama 8.69 sec

Breakdown by phase:
Model loading: llama.cpp 241 ms <- 2x faster; ollama 553 ms
Prompt processing: llama.cpp 416.04 tokens/s (eval time 45.67 ms) <- 10x faster; ollama 42.17 tokens/s (eval time 498 ms)
Token generation: llama.cpp 137.79 tokens/s (eval time 6.62 sec) <- 13% faster; ollama 122.07 tokens/s (eval time 7.64 sec)

llama.cpp is LLM inference in C/C++; ollama adds abstraction layers and marketing. Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.
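The headline ratios in the post can be sanity-checked directly from the raw numbers it reports (a minimal sketch; all figures below are taken from the post itself, and the exact percentages depend on rounding):

```python
# Recompute the speedup claims from the posted benchmark numbers.
llama_total, ollama_total = 6.85, 8.69    # total duration, seconds
llama_load, ollama_load = 241, 553        # model loading, milliseconds
llama_pp, ollama_pp = 416.04, 42.17       # prompt processing, tokens/s
llama_tg, ollama_tg = 137.79, 122.07      # token generation, tokens/s

total_speedup = ollama_total / llama_total - 1  # ~0.269 -> ~26.9% faster overall
load_ratio = ollama_load / llama_load           # ~2.3x faster model loading
pp_ratio = llama_pp / ollama_pp                 # ~9.9x faster prompt processing
tg_speedup = llama_tg / ollama_tg - 1           # ~0.129 -> ~13% faster generation

print(f"total: {total_speedup:.1%}, load: {load_ratio:.1f}x, "
      f"prompt: {pp_ratio:.1f}x, gen: {tg_speedup:.1%}")
```

The recomputed values line up with the claims: roughly 27% faster end-to-end, a bit over 2x on loading, close to 10x on prompt processing, and about 13% on token generation.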
New activity 2 days ago in HuggingFaceTB/SmolVLM-256M-Instruct: "There is an issue with AutoProcessor"
Replied to mitkox's post 2 days ago (same post as above).
Organizations: none yet
Spaces (12, sorted by recently updated; 6 shown):
🌍 Merging Diffusers (pinned, sleeping): Merge your base model with your LoRA
👀 Img2Prompt For SD3 (pinned, running): Image to Prompt using finetuned PaliGemma for SD3
💬 Corenet Chat-8K (pinned, sleeping)
📊 SuperFast SDXL (runtime error)
🖼 Automated Stable Diffusion 3 Comparison (runtime error)
🏢 Hajax MultiModal (runtime error)
Models (22, sorted by recently updated; 10 shown):
Tech-Meld/pixel-perfect-pony-sdxl: Text-to-Image · Updated Sep 2, 2024 · 116 · 1
Tech-Meld/her-eyes: Text-to-Image · Updated Aug 17, 2024 · 6 · 1
Tech-Meld/Life-XL_V1: Text-to-Image · Updated Aug 11, 2024 · 1
Tech-Meld/life-fx: Text-to-Image · Updated Aug 11, 2024 · 8 · 1
Tech-Meld/Smegmma-9B-v1-Q4_K_M-GGUF: Updated Jul 21, 2024 · 2
Tech-Meld/mathstral-7B-v0.1-Q4_K_M-GGUF: Updated Jul 21, 2024 · 1
Tech-Meld/the-bmw-m3-gtr: Text-to-Image · Updated Jul 21, 2024 · 4 · 1
Tech-Meld/NuminaMath-7B-TIR-Q4_K_M-GGUF: Text Generation · Updated Jul 14, 2024 · 1
Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF: Text Generation · Updated Jun 30, 2024 · 2 · 1
Tech-Meld/llm-compiler-7b-Q4_K_M-GGUF: Updated Jun 28, 2024 · 1
Datasets: none public yet