- In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss (Paper • 2402.10790 • Published • 42 upvotes)
- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (Paper • 2408.03314 • Published • 57 upvotes)
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (Paper • 2403.09629 • Published • 77 upvotes)
Gabriel Pendl (jompaaa)
AI & ML interests: None yet
Recent Activity
- upvoted an article about 2 hours ago: Introducing EuroBERT: A High-Performance Multilingual Encoder Model
- liked a dataset about 8 hours ago: facebook/natural_reasoning
- liked a dataset about 11 hours ago: wandb/RAGTruth-processed
Organizations: None yet
Collections: 1
Datasets: None public yet