Column dtypes: author_id (stringclasses, 3 values), created_at (unknown), text (string, length 23–357), retweet_count (int64, 0–147), reply_count (int64, 0–34), like_count (int64, 0–1.15k), quote_count (int64, 0–14), bookmark_count (int64, 0–672), impression_count (int64, 0–107k)

author_id | created_at | text | retweet_count | reply_count | like_count | quote_count | bookmark_count | impression_count
---|---|---|---|---|---|---|---|---
1473756922117513227 | "2025-02-27T01:47:46" | 🔥 Thrilled to share that [3/4] of our submissions to #CVPR2025 were accepted—particularly exciting given this year’s ~22% acceptance rate out of 13K submissions. Here are some new directions we explored, mainly about generative AI, trustworthiness, robustness, and multimodal… https://t.co/ppq9lTTQOc https://t.co/aAMYs3xwoG | 5 | 4 | 62 | 0 | 10 | 5,394 |
1473756922117513227 | "2025-02-27T00:05:25" | @ICCVConference It's a very timely post | 0 | 1 | 15 | 0 | 0 | 1,014 |
1473756922117513227 | "2025-02-26T23:44:30" | @CVPR Thanks for the explanation; it seems like a "natural selection" and really depends on Reviewers-ACs. | 0 | 1 | 5 | 0 | 0 | 1,534 |
1473756922117513227 | "2025-02-26T23:38:26" | @CVPR This year’s acceptance rate has reached a historic low. I’ve noticed that other major AI venues like ICLR/NeurIPS have been expanding to include more accepted papers. Do you have any thoughts on why PCs/SACs might choose not to adjust the acceptance rate if they could do so? | 0 | 1 | 7 | 0 | 0 | 2,814 |
1473756922117513227 | "2025-02-26T19:54:12" | Thanks for the hard work of PCs, SACs, and ACs! https://t.co/2wKT56BEG8 https://t.co/48prM2tyUM | 0 | 0 | 17 | 0 | 0 | 2,073 |
1473756922117513227 | "2025-02-26T19:20:01" | @CVPR results out:<br>https://t.co/cFyEbrhZ5B https://t.co/pzEBRvpzsO | 2 | 1 | 9 | 0 | 0 | 3,700 |
1473756922117513227 | "2025-02-26T18:56:13" | @BKShalon @CVPR I waited till early morning (late midnight), then went to sleep. When I woke up again, still no results! 🤓 | 0 | 1 | 0 | 0 | 0 | 1,267 |
1473756922117513227 | "2025-02-26T17:55:08" | Still waiting, huh? https://t.co/gxbpeFYO3i https://t.co/te3SdlxUCh | 0 | 1 | 9 | 0 | 0 | 1,455 |
1473756922117513227 | "2025-02-26T17:27:29" | @abby621 @Kenneth97180053 @CVPR @Qi12Tom @aliathar94 very helpful!thanks! | 0 | 0 | 5 | 0 | 0 | 3,935 |
1473756922117513227 | "2025-02-26T17:27:13" | RT @abby621: @Kenneth97180053 @CVPR @Qi12Tom @aliathar94 @_vztu I think that there's a lot of confusion from folks who haven't participated… | 17 | 0 | 0 | 0 | 0 | 0 |
1473756922117513227 | "2025-02-26T06:13:59" | @leehomyc @CVPR Wow, that's a good advertisement haha :) | 0 | 0 | 5 | 0 | 0 | 4,291 |
1473756922117513227 | "2025-02-26T05:50:52" | 😟Hey friends, star/reply to this tweet if you're also waiting for @CVPR decisions https://t.co/McLSuGUkzQ | 4 | 15 | 186 | 1 | 5 | 55,038 |
1473756922117513227 | "2025-02-25T17:49:55" | RT @_vztu: 🚨𝐀𝐥𝐢𝐠𝐧𝐢𝐧𝐠 𝐕𝐢𝐬𝐢𝐨𝐧-𝐋𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐌𝐨𝐝𝐞𝐥𝐬 𝐋𝐢𝐤𝐞 𝐍𝐞𝐯𝐞𝐫 𝐁𝐞𝐟𝐨𝐫𝐞, 𝐰𝐢𝐭𝐡 𝐑𝐞-𝐀𝐥𝐢𝐠𝐧!<br>I’m thrilled to introduce RE-ALIGN—our breakthrough framewor… | 22 | 0 | 0 | 0 | 0 | 0 |
1473756922117513227 | "2025-02-24T19:23:44" | 🚨𝐀𝐥𝐢𝐠𝐧𝐢𝐧𝐠 𝐕𝐢𝐬𝐢𝐨𝐧-𝐋𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐌𝐨𝐝𝐞𝐥𝐬 𝐋𝐢𝐤𝐞 𝐍𝐞𝐯𝐞𝐫 𝐁𝐞𝐟𝐨𝐫𝐞, 𝐰𝐢𝐭𝐡 𝐑𝐞-𝐀𝐥𝐢𝐠𝐧!<br>I’m thrilled to introduce RE-ALIGN—our breakthrough framework that transforms Vision-Language Models (VLMs) by mitigating hallucinations and ensuring… https://t.co/WiRt49O3gM https://t.co/N9BKi2Pz39 | 22 | 2 | 91 | 0 | 36 | 6,916 |
1473756922117513227 | "2025-02-24T05:40:57" | @xwang_lk what about alexa | 0 | 1 | 1 | 0 | 0 | 1,082 |
1473756922117513227 | "2025-02-21T07:00:41" | ▀▄▀▄▀𝐂𝐚𝐧 𝐖𝐞 𝐓𝐫𝐮𝐥𝐲 𝐓𝐫𝐮𝐬𝐭 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈?▄▀▄▀▄<br>𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗙𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗠𝗼𝗱𝗲𝗹𝘀 (GenFMs) are advancing at an unprecedented pace, but can we trust them in high-stakes applications? Excited to share our latest research, a… https://t.co/WyLcUwPhyC https://t.co/6gQJoVzJ7z | 9 | 1 | 37 | 1 | 10 | 2,828 |
1473756922117513227 | "2025-02-21T06:40:05" | @HowieH36226 @hengjinlp @mohitban47 @MLamparth @jieyuzhao11 @JieyuZhang20 @WeijiaShi2 @HuaxiuYaoML @hhsun1 @ysu_nlp @CaimingXiong @UnrollHelper | 0 | 1 | 0 | 0 | 0 | 93 |
1240355312 | "2025-02-28T02:56:07" | @soldni slow in token output or in getting to know who Luca Soldaini is? ;) | 0 | 0 | 1 | 0 | 0 | 91 |
1240355312 | "2025-02-28T02:50:38" | - Paper: https://t.co/GrXT5DvQcW<br>- Code: https://t.co/Xnwrr804Ng | 0 | 0 | 1 | 0 | 0 | 156 |
1240355312 | "2025-02-28T02:50:38" | Indexing cost https://t.co/RGs8Zw2jWx | 0 | 1 | 1 | 0 | 1 | 158 |
1240355312 | "2025-02-28T02:50:37" | Detailed QA performance comparison https://t.co/5Ukg7DfYNT | 0 | 1 | 1 | 0 | 0 | 30 |
1240355312 | "2025-02-28T02:50:36" | Sharing the work I'm most excited about lately! Meet HippoRAG 2, a drop-in replacement of your RAG solution.<br>There's lots of enthusiasm about Graph + RAG, like GraphRAG or our own HippoRAG. However, while these methods fare favorably compared with early embedding models like… https://t.co/fkl0hT8lIY https://t.co/8uIie7zFdm https://t.co/kG47c3mGS4 | 5 | 1 | 16 | 1 | 1 | 941 |
1240355312 | "2025-02-26T03:18:38" | - paper: https://t.co/I036ikZLtx<br>- website (code/demo/etc): https://t.co/oPZH9zIi3y | 0 | 0 | 5 | 0 | 2 | 657 |
1240355312 | "2025-02-26T03:01:46" | Sparse Autoencoders have proven super useful for interpreting and steering LLMs. Now SAEs have finally come to vision models like CLIP and DINO!<br>SAEs allow us to interpret and control vision models at a fine-grained concept level. Key findings:<br>1. SAEs can extract many crisp… https://t.co/lBO29r3Bwq https://t.co/rKic283mK5 | 11 | 2 | 48 | 0 | 26 | 5,983 |
1240355312 | "2025-02-26T02:23:39" | @dawnsongtweets @HannaHajishirzi @UW @OhioState Thanks for having me! It was fun and many great questions from the audience. | 0 | 0 | 1 | 0 | 0 | 131 |
1240355312 | "2025-02-26T02:23:01" | RT @dawnsongtweets: @HannaHajishirzi @UW 🙏 Huge thanks to @ysu_nlp @OhioState for the 3rd lecture On Reasoning, Memory, and Planning of Lan… | 4 | 0 | 0 | 0 | 0 | 0 |
1240355312 | "2025-02-26T02:20:24" | RT @samstevens6860: What's actually different between CLIP and DINOv2? CLIP knows what "Brazil" looks like: Rio's skyline, sidewalk pattern… | 51 | 0 | 0 | 0 | 0 | 0 |
1240355312 | "2025-02-21T18:09:26" | RT @HowieH36226: Toward Trustworthy Generative Foundation Models (GenFMs) 🚀<br>🎇After six months of hard work and thanks to the efforts of th… | 28 | 0 | 0 | 0 | 0 | 0 |
1240355312 | "2025-02-21T05:25:23" | @jkkummerfeld @tallinzen @yuvalmarton @VeredShwartz I reviewed for CVPR’25. The policy seems to be like every author on any submission enters a ‘may be selected for review’ pool. PCs will filter the pool by publication record and add reviewers. Each such author-reviewer would be assigned at most 3 papers. | 0 | 0 | 3 | 0 | 0 | 458 |
1141052916570214400 | "2025-02-27T22:21:52" | @calebfahlgren You should add Gemini 2.0 Flash | 0 | 0 | 2 | 0 | 0 | 114 |
1141052916570214400 | "2025-02-27T21:40:51" | Gemini 2.0 Pro catches it. https://t.co/AQz4xidovC https://t.co/f6Nf5hmgvp | 0 | 1 | 10 | 0 | 0 | 2,326 |
1141052916570214400 | "2025-02-27T21:20:25" | @dgrreen Wondering if they are not having better, newer data or are not seeing improvements on newer data or want to avoid AI generated data. | 0 | 1 | 2 | 0 | 0 | 334 |
1141052916570214400 | "2025-02-27T21:16:33" | The knowledge cutoff for GPT-4.5 is October 2023? | 2 | 9 | 34 | 1 | 2 | 5,562 |
1141052916570214400 | "2025-02-27T20:32:22" | Did you know @GoogleDeepMind Gemini 2.0 Flash is $0.1/$0.4 per million input/output token? Or you get 750 Million input tokens for $75.🔥 | 3 | 2 | 65 | 1 | 9 | 3,420 |
1141052916570214400 | "2025-02-27T20:27:06" | Holy. How big did @OpenAI go? $75/$150 per million input/output token. https://t.co/qDconiotxt | 2 | 4 | 24 | 0 | 0 | 2,878 |
1141052916570214400 | "2025-02-27T20:17:27" | @Kratius1 I am impressed by the results It looks like a solid improvement to GPT-4o. But i expected a longer stream, more cool demos. Multimodality, Agent demo, something... | 1 | 0 | 8 | 0 | 0 | 480 |
1141052916570214400 | "2025-02-27T20:14:18" | Thats it? Am I the only who is confused? | 1 | 11 | 71 | 0 | 1 | 6,025 |
1141052916570214400 | "2025-02-27T18:10:57" | RT @freddy_alfonso_: Watch as one Gemini teaches another how to make breakfast, entirely coded in Python with FastRTC.<br>Set up instructions… | 18 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-27T17:56:35" | Demo: https://t.co/XIt4ndz9qw<br>Code: https://t.co/6c7DJJbVsW<br>Blog: https://t.co/5S5BYegXcF<br>https://t.co/XIt4ndz9qw | 0 | 1 | 11 | 0 | 21 | 1,006 |
1141052916570214400 | "2025-02-27T17:56:35" | Excited to share a new demo that combines @GoogleDeepMind Gemini 2.0 with @nextjs to extract structured outputs from PDFs through natural language. Based on the “From PDFs to Insights: Structured Outputs from PDFs with Gemini 2.0” blog post. 👀<br>TL;DR:<br>📄 Upload PDFs and preview… https://t.co/05O7wNeJgg https://t.co/4JP8bjA9zS | 12 | 7 | 109 | 0 | 95 | 6,833 |
1141052916570214400 | "2025-02-27T11:45:40" | @altryne They will use both and if not more. https://t.co/ccHW7OC5qD | 0 | 1 | 2 | 0 | 0 | 269 |
1141052916570214400 | "2025-02-27T06:35:58" | Models: https://t.co/U3m0SvNz8M<br>Paper: https://t.co/kc1GkDRPkR | 2 | 2 | 14 | 0 | 6 | 1,688 |
1141052916570214400 | "2025-02-27T06:35:58" | Phi-4 mini update! @MSFTResearch released Phi-4 mini Instruct (3.8B) and Phi-4 Multimodal Instruct (5.6B) with audio and image support by integrating modality-specific LoRAs while keeping the base language model entirely frozen.<br>Multimodal TL;DR:<br>🖼️ Understands text, images, and… https://t.co/Sihznaydkt https://t.co/iZUC3RcPyE | 27 | 1 | 129 | 1 | 63 | 7,832 |
1141052916570214400 | "2025-02-26T20:46:52" | RT @SullyOmarr: It's official<br>swapped out everything in @ottogrid_ai from claude 3.5 to gemini 2.0 flash<br>getting better results at 1/30 t… | 14 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-26T20:45:35" | @SullyOmarr @ottogrid_ai Great to hear! let me know if we can be of any help as you keep scaling! | 0 | 0 | 3 | 0 | 0 | 467 |
1141052916570214400 | "2025-02-26T19:29:58" | RT @googleaidevs: A few quick updates from the PaliGemma 2 Mix announcement last week. 👇🧵<br>https://t.co/39cXxBgkIT | 30 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-26T18:28:12" | https://t.co/vQkcYS1hyW | 0 | 0 | 3 | 1 | 0 | 1,141 |
1141052916570214400 | "2025-02-26T18:28:12" | Amazon wants to compete with @OpenAI ChatGPT and @GoogleDeepMind Gemini App 👀 @amazon just announced Alexa+ a complete refresh of Alexa, here is what we technically know so far:<br>🚀 Alexa+ will be powered by Amazon Nova and @AnthropicAI Claude<br>🔗 New “Tool” APIs for 10k+… https://t.co/YjGubscZp4 https://t.co/jsgG4xRVlf | 10 | 2 | 66 | 2 | 23 | 4,706 |
1141052916570214400 | "2025-02-26T16:50:00" | Gemini Demo (fork and add your api key): https://t.co/1C0hJXrhRe<br>Docs: https://t.co/R061c10xVs | 1 | 0 | 14 | 0 | 18 | 5,684 |
1141052916570214400 | "2025-02-26T16:49:59" | Want to build Real-time Apps with @GoogleDeepMind Gemini 2.0 Flash? FastRTC lets you build Python based real-time apps using Gradio-UI. 🔥<br>🔄 Transforms Python functions into bidirectional audio/video streams with minimal code<br>🗣️ Built-in voice detection and automatic… https://t.co/zUO1WA1JMj https://t.co/o835htr0hl | 15 | 2 | 80 | 4 | 65 | 19,119 |
1141052916570214400 | "2025-02-26T10:40:52" | LLM pricing rush hours? 👀 https://t.co/7ycZeAi9JJ | 10 | 7 | 97 | 2 | 18 | 8,209 |
1141052916570214400 | "2025-02-26T09:32:03" | @jocarrasqueira @onyekaugo @googleaidevs @GitHubCopilot @patloeber @oneyekaugo you can get started without an GCP account using a regular Google, similar to AI Studio. https://t.co/tHNpvaSFVm | 0 | 2 | 2 | 0 | 0 | 51 |
1141052916570214400 | "2025-02-26T09:27:09" | Paper: https://t.co/EkyainmCj2<br>Blog: https://t.co/O0uR9pwlN0 | 4 | 0 | 27 | 0 | 16 | 2,124 |
1141052916570214400 | "2025-02-26T09:27:09" | SWE-RL from @AIatMeta is a implementation using Reinforcement Learning (GRPO) combined with data evolution and rule-based rewards to solve real-world software issues and fix bugs. SWE-RL achieves state-of-the-art performance among medium-sized models.<br>Implementation<br>1️⃣ Collect… https://t.co/NHt2hFLAUn https://t.co/8E32bYKa2l | 56 | 4 | 332 | 6 | 230 | 55,482 |
1141052916570214400 | "2025-02-26T08:03:04" | We all know, naming AI models is not easy. But i like this one. 🔦💡 https://t.co/1oyYZzL4Md | 1 | 2 | 20 | 0 | 1 | 2,330 |
1141052916570214400 | "2025-02-26T07:46:11" | Forking Linux will be the new forking Chrome. https://t.co/DLVXBgIqxM | 0 | 0 | 11 | 0 | 2 | 2,647 |
1141052916570214400 | "2025-02-25T21:13:07" | @casper_hansen_ @TheXeophon I don’t know about the money part. But Veo 2 is coming to YouTube https://t.co/sZf5Xb1Mpe | 0 | 0 | 2 | 0 | 0 | 64 |
1141052916570214400 | "2025-02-25T19:33:01" | @TheXeophon Updated now! We keep working on making sure the experience is great everywhere. | 0 | 1 | 3 | 0 | 0 | 167 |
1141052916570214400 | "2025-02-25T19:31:23" | @HrishbhDalal @GoogleDeepMind 🔦 | 0 | 1 | 1 | 0 | 1 | 92 |
1141052916570214400 | "2025-02-25T18:04:29" | Try it: https://t.co/Hn5c3VzfQh | 0 | 1 | 2 | 0 | 1 | 969 |
1141052916570214400 | "2025-02-25T18:04:29" | Model Update! @GoogleDeepMind Gemini 2.0 Flash-Lite is now generally available for production use! Model ID: `gemini-2.0-flash-lite`<br>💰Free-Tier with 1500 req/day then $0.075/$0.3 per 1M input/output token.<br>⚡Outperforms Gemini 1.5 Flash across benchmarks.<br>📏 Supports 1 million… https://t.co/Z4JkdnZAvR https://t.co/rvxsQaNIXo | 16 | 4 | 113 | 0 | 25 | 5,670 |
1141052916570214400 | "2025-02-25T17:03:11" | Currently live: https://t.co/zZu2ZXsWf1 https://t.co/0Z5ChkAxfB https://t.co/XGhlfZanvZ | 1 | 1 | 31 | 0 | 3 | 3,067 |
1141052916570214400 | "2025-02-25T14:07:32" | Start for free: https://t.co/3TgjZvIG4C https://t.co/Z9GRBhLXod | 3 | 1 | 33 | 0 | 9 | 3,446 |
1141052916570214400 | "2025-02-25T13:49:42" | RT @Thom_Wolf: Let me add a bit context to the latest DeepSeek code release as I feel it was a bit bare bones.<br>Mixture-of-Experts (MoE) is… | 75 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-25T08:40:54" | Do you remember "Twitch play's Pokemon?". It took the chat "02d 11h 29m" to "Defeated Lt. Surge", thats how far Claude 3.7 got with ~30-35k actions.<br>Now, I am waiting for the first "AI plays Pokemon" stream.👀 | 1 | 0 | 7 | 1 | 0 | 4,898 |
1141052916570214400 | "2025-02-25T08:20:58" | Yesterday @AnthropicAI released Claude 3.7 with a focus on Coding. Here is a TL:DR; 🧵<br>> Excels at coding tasks esp. JS/TS and Python, many good examples and vibes on social media; State-of-the-art on SWE-bench verified (62.3%/70.2%)<br>> Highest score on the Aider Polyglot… https://t.co/bwGNPEsrid https://t.co/IvVUX8B7FH | 4 | 2 | 35 | 1 | 6 | 3,442 |
1141052916570214400 | "2025-02-25T07:55:48" | @Teknium1 A experimental version. You can try it here: https://t.co/xRSLQHYDgy<br>Would love to get your thoughts and feedback. https://t.co/a3FzBa1M3F | 0 | 0 | 8 | 0 | 1 | 549 |
1141052916570214400 | "2025-02-24T23:42:10" | Free Claude Stickers 😅 https://t.co/rhe1AF8ZAA https://t.co/Rm4lIuV7u5 | 2 | 3 | 18 | 0 | 1 | 4,013 |
1141052916570214400 | "2025-02-24T23:06:26" | Easter Egg found? 🥚<br>> This tool should be used whenever a user expresses interest in receiving Anthropic or Claude stickers, swag, or merchandise. When triggered, it will display a shipping form for the user to enter their mailing address and contact details. Once submitted,… https://t.co/vSsq8orjuv https://t.co/gcZfpOBd3g | 0 | 1 | 11 | 1 | 6 | 6,001 |
1141052916570214400 | "2025-02-24T23:04:22" | https://t.co/ZqTed9mIrb | 0 | 0 | 2 | 0 | 0 | 1,110 |
1141052916570214400 | "2025-02-24T23:04:22" | If you want to see what prompts "Claude Code" uses you can take a look at cjs file on npm ⬇️ https://t.co/BEEAY0buvL | 2 | 2 | 14 | 0 | 13 | 3,125 |
1141052916570214400 | "2025-02-24T22:50:09" | RT @yacineMTB: anthropic looked at 98% of their tokens being generated being code only tokens and then said "hey maybe we should focus on m… | 147 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-24T22:37:11" | Reading good feedback and vibes on Claude. Good job 🙌🏻 But surprised the price stayed at $3/$15.<br>That’s 30x more expensive then Gemini 2.0 Flash and ~3x more then Open o3-mini. 👀 https://t.co/llTMbj2029 | 0 | 8 | 29 | 1 | 3 | 2,542 |
1141052916570214400 | "2025-02-24T17:45:42" | RT @notthatkush: switched to gemini 2 flash because of constant tweets by @OfficialLoganK on my feed. now, for more than half of my use cas… | 6 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-24T16:43:19" | You can now branch conversations in AI Studio to new ones to try out different prompts with history and not lose track. https://t.co/EOb5RDjvUQ https://t.co/6d85eAzKKT | 1 | 1 | 37 | 0 | 6 | 3,950 |
1141052916570214400 | "2025-02-24T15:14:42" | @God_Official__ @ekdnam @TheXeophon @matvelloso @patloeber Thanks for flagging this. We hear you and are actively working on updating the snippets. I update here when it is updated. | 0 | 1 | 2 | 0 | 0 | 53 |
1141052916570214400 | "2025-02-24T13:44:13" | RT @vwxyzjn: https://t.co/8JLFbU4IY8 has some pretty amazing tricks.<br>🔥 E.g., it offloads vLLM weights to CPU then bring it back. The impl… | 40 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-24T13:31:08" | DeepSeek released open-source CUDA kernels optimized for NVIDIA Hopper GPUs: https://t.co/WLiT79ekgK<br>Soon in vLLM: https://t.co/nNOtmR81Sw | 3 | 0 | 24 | 0 | 8 | 2,281 |
1141052916570214400 | "2025-02-24T13:31:08" | Deespeek released their MLA implementaion, here is how it works 💡<br>Multi-head Latent Attention (MLA) speeds up LLM inference and reduces memory needs. It uses "low-rank joint compression" to shrink the Key-Value (KV) to reduce memory usage by up to 93.3% and improve throughput… https://t.co/p7FwKhlgC3 https://t.co/CtfVRCULb1 | 106 | 12 | 555 | 7 | 288 | 36,601 |
1141052916570214400 | "2025-02-24T08:38:12" | Open Source Deep Research implementation using @GoogleDeepMind Gemini 2.0 Flash:<br>1. Query Analysis<br>2. Query Generation<br>3. Research Tree Building<br>4. Deep Research (Comprehensive Mode)<br>5. Report Generation https://t.co/8sTO6qgkmF | 14 | 1 | 110 | 0 | 58 | 12,625 |
1141052916570214400 | "2025-02-24T08:03:58" | LFG! @GoogleDeepMind Gemini 2.0 Flash (295B) had last week more usage than Claude 3.5 (236.6B) on @OpenRouterAI! 🔥 https://t.co/HrGOntDjut | 5 | 11 | 136 | 2 | 22 | 9,706 |
1141052916570214400 | "2025-02-24T07:31:02" | Repository: https://t.co/AHdWMa6Z3S | 7 | 1 | 49 | 0 | 30 | 5,568 |
1141052916570214400 | "2025-02-24T07:31:01" | Are LLMs ready to replace OCR solutions? Yes, the OmniAI OCR Benchmark compared OCR providers against LLMs across accuracy, cost, and latency metrics showing Multimodal LLMs are not only better, they are also cheaper with @GoogleDeepMind Gemini 2.0 Flash offering the best… https://t.co/zzjPFEkw0j https://t.co/2enphYNd2f | 97 | 34 | 772 | 9 | 672 | 83,303 |
1141052916570214400 | "2025-02-23T15:42:34" | Documentation: https://t.co/n9aQRtvfSJ<br>Code: https://t.co/FJCdXW3Qex | 1 | 0 | 10 | 0 | 6 | 1,534 |
1141052916570214400 | "2025-02-23T15:42:33" | Are you running open LLMs on @kubernetesio? Then you must take a look at AIBrix! AIBrix is @BytedanceTalk production solution for open LLMs on Kubernetes running @vllm_project.👀 It supports multi-LoRA management, intelligent routing, autoscaling, and fault tolerance<br>How is… https://t.co/FAYxyVMDCy https://t.co/dLYKkWx1yu | 18 | 4 | 92 | 1 | 62 | 6,764 |
1141052916570214400 | "2025-02-22T09:18:34" | https://t.co/lnSVLaPSO4 | 5 | 1 | 87 | 0 | 67 | 5,557 |
1141052916570214400 | "2025-02-22T09:18:33" | 2B is enough to match Google Translator and GPT-4 Turbo on Translation! https://t.co/84WIQkOyXd | 99 | 17 | 1,150 | 6 | 504 | 107,428 |
1141052916570214400 | "2025-02-21T12:35:11" | RT @patloeber: Google's new AI co-scientist, simply explained:<br>It already helped advance biomedical research:<br>🔬Proposed new drugs for bloo… | 11 | 0 | 0 | 0 | 0 | 0 |
1141052916570214400 | "2025-02-21T10:51:59" | @EastlondonDev @DynamicWebPaige Thank you! Investigating. | 0 | 0 | 2 | 0 | 0 | 33 |
1141052916570214400 | "2025-02-21T10:25:43" | @EastlondonDev @DynamicWebPaige Thank you! Trying to reproduce. Do you have any Advanced settings on?<br>Here is the output i got: "Quantum computing is a type of computation that harnesses the principles of quantum mechanics, like superposition and entanglement, to solve complex problems that are beyond the… https://t.co/rT8fqgVvKi https://t.co/CEFw7pmM5t | 0 | 1 | 1 | 0 | 0 | 66 |
1141052916570214400 | "2025-02-21T09:44:42" | SigLIP 2 blog from @mervenoyann and team to learn more and try it out: https://t.co/Fvn7ZFc5Rc | 0 | 0 | 7 | 0 | 3 | 1,431 |
1141052916570214400 | "2025-02-21T09:41:26" | @EastlondonDev @DynamicWebPaige Hey, that should not be the case. Could you please share the model id, prompt you are using and if tools?<br>Tested all 4 models and all responded https://t.co/FTW8NcuB6V | 0 | 1 | 1 | 0 | 0 | 37 |
1141052916570214400 | "2025-02-21T09:28:50" | Paper: https://t.co/zWqU8eKyIn<br>Models: https://t.co/IycMeAMnSo<br>https://t.co/IycMeAMnSo | 0 | 1 | 9 | 0 | 0 | 1,739 |
1141052916570214400 | "2025-02-21T09:28:50" | One of the best Vision-Language Encoder got an Update! @GoogleDeepMind releases SigLIP 2!<br>SigLIP 2 merges captioning pretraining, self-supervised learning, and online data curation, and outperforms its previous version in 10+ tasks, with support for flexible resolutions and… https://t.co/qFu7s9dW9y https://t.co/LCAs3JXc2Q | 24 | 10 | 148 | 3 | 51 | 7,601 |
1141052916570214400 | "2025-02-21T08:44:27" | A team at @deepseek_ai plans to open-source 5 repositories next week, one per day. Focused on infrastructure and building blocks of their online services. https://t.co/XFd8vARAIe | 127 | 21 | 1,002 | 14 | 143 | 55,776 |