- ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
  Paper • 2403.03853 • Published • 61
- SliceGPT: Compress Large Language Models by Deleting Rows and Columns
  Paper • 2401.15024 • Published • 62
- Your Transformer is Secretly Linear
  Paper • 2405.12250 • Published • 136
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 59
Collections including paper arxiv:2403.04652
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 59
- A Survey on Data Selection for Language Models
  Paper • 2402.16827 • Published • 3
- Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
  Paper • 2402.00159 • Published • 55
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 28
- Scaling Instruction-Finetuned Language Models
  Paper • 2210.11416 • Published • 5
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces
  Paper • 2312.00752 • Published • 131
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 50
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 59
- Pix2Gif: Motion-Guided Diffusion for GIF Generation
  Paper • 2403.04634 • Published • 13
- StableDrag: Stable Dragging for Point-based Image Editing
  Paper • 2403.04437 • Published • 24
- Teaching Large Language Models to Reason with Reinforcement Learning
  Paper • 2403.04642 • Published • 43
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 59