azaad's Collections
To Read
Watermarking Makes Language Models Radioactive
Paper • 2402.14904 • Published • 23
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
Paper • 2402.15220 • Published • 19
GPTVQ: The Blessing of Dimensionality for LLM Quantization
Paper • 2402.15319 • Published • 19
DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation
Paper • 2402.11929 • Published • 10
Coercing LLMs to do and reveal (almost) anything
Paper • 2402.14020 • Published • 12
How to Train Data-Efficient LLMs
Paper • 2402.09668 • Published • 40
MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT
Paper • 2402.16840 • Published • 23
Greedy Growing Enables High-Resolution Pixel-Based Diffusion Models
Paper • 2405.16759 • Published • 7
Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning
Paper • 2405.17258 • Published • 14
Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
Paper • 2405.20541 • Published • 22
Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters
Paper • 2406.05955 • Published • 23