A curated list of research on continual learning with pretrained models, maintained by danelpeng.
## News

- [2025/05/28] Updated with latest papers.
- [2025/03/24] Updated with latest papers.
- [2024/10/25] Updated with latest papers.
- [2024/10/08] Created this repo.
## Contents

- Survey
- Prompt Based
- Adapter Based
- LoRA Based
- MoE/Ensemble Based
- VLM Based
- LLM Based
- Diffusion Based
- Others
- Application
## Survey

- When Continue Learning Meets Multimodal Large Language Model: A Survey [Arxiv 2025.02] China Agricultural University, Peking University
- Achieving Upper Bound Accuracy of Joint Training in Continual Learning [Arxiv 2025.02] University of Illinois Chicago
- Lifelong Learning of Large Language Model based Agents: A Roadmap [Arxiv 2025.01] South China University of Technology, Mohamed bin Zayed University of Artificial Intelligence, Tencent AI Lab
- Continual Learning of Large Language Models: A Comprehensive Survey [Arxiv 2024.11] Rutgers University, Google Cloud AI Research
- Recent Advances of Multimodal Continual Learning: A Comprehensive Survey [Arxiv 2024.10] The Chinese University of Hong Kong, Tsinghua University, University of Illinois Chicago
- Towards Lifelong Learning of Large Language Models: A Survey [Arxiv 2024.06] South China University of Technology
- Continual Learning for Large Language Models: A Survey [Arxiv 2024.02] Monash University, Griffith University
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [TPAMI 2024] Tsinghua University
- Continual Learning with Pre-Trained Models: A Survey [IJCAI 2024] Nanjing University
- How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances [EMNLP 2023] University of Technology Sydney, University of Liverpool, University of Wollongong, University College London
## Prompt Based

- Prompt-Enhanced: Leveraging language representation for prompt continual learning [NN 2025] Southeast University, Institute of Automation, CAS
- PrePrompt: Predictive prompting for class incremental learning [Arxiv 2025.05] Institute of Computing Technology, CAS
- Dynamic Prompt Adjustment for Multi-Label Class-Incremental Learning [Arxiv 2024.12] Anhui University
- PEARL: Input-Agnostic Prompt Enhancement with Negative Feedback Regulation for Class-Incremental Learning [AAAI 2025] Southeast University
- CAPrompt: Cyclic Prompt Aggregation for Pre-Trained Model Based Class Incremental Learning [Arxiv 2024.12] Peking University
- Adaptive Prompting for Continual Relation Extraction: A Within-Task Variance Perspective [AAAI 2025] VinAI Research
- Semantic Residual Prompts for Continual Learning [ECCV 2024] University of Modena and Reggio Emilia
- Dynamically Managing a Prompt Pool via Self-Enhancement in Continual Learning [NeurIPS 2024] Chung-Ang University, German Research Center for Artificial Intelligence
- Vector Quantization Prompting for Continual Learning [Arxiv 2024.10] Communication University of China, Harbin Institute of Technology, Shenzhen, The Chinese University of Hong Kong
- Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach [NeurIPS 2024] University of Technology Sydney, Singapore Management University, University of Illinois at Chicago
- ModalPrompt: Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models [Arxiv 2024.10] Institute of Automation, CAS
- Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning [Arxiv 2024.10] VinAI Research, Monash University, Hanoi University of Science and Technology, University of Oregon, The University of Texas at Austin
- LW2G: Learning Whether to Grow for Prompt-based Continual Learning [Arxiv 2024.09] Zhejiang University, Nanjing University
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [ECCV 2024] Tsinghua University, SmartMore, CUHK, HIT(SZ), Meta Reality Labs, HKU
- Evolving Parameterized Prompt Memory for Continual Learning [AAAI 2024] Xi'an Jiaotong University
- Generating Prompts in Latent Space for Rehearsal-free Continual Learning [ACMMM 2024] East China Normal University
- Convolutional Prompting meets Language Models for Continual Learning [CVPR 2024] IIT Kharagpur, IML Amazon India
- Consistent Prompting for Rehearsal-Free Continual Learning [CVPR 2024] Sun Yat-sen University, HKUST
- Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning [WACV 2024] Rutgers University, Google Research, Google Cloud AI
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality [NeurIPS 2023] Tsinghua-Bosch Joint Center for ML, Tsinghua University
- When Prompt-based Incremental Learning Does Not Meet Strong Pretraining [ICCV 2023] Sun Yat-sen University, Peng Cheng Laboratory
- Introducing Language Guidance in Prompt-based Continual Learning [ICCV 2023] RPTU, DFKI, ETH Zurich, TUM, Google
- Efficient Continual Pre-training for Building Domain Specific Large Language Models [Arxiv 2023.10] UIUC, Amazon
- MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning [Arxiv 2023.07] ETS Montreal
- Progressive Prompts: Continual Learning for Language Models [ICLR 2023] University of Toronto & Vector Institute, Meta AI
- Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning [ICCV 2023] Kyung Hee University
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023] Mohamed bin Zayed University of AI, Australian National University, Linkoping University, University of California, Merced, Google Research
- Generating Instance-level Prompts for Rehearsal-free Continual Learning [ICCV 2023 (oral)] Seoul National University, NAVER AI Lab, NAVER Cloud, AWS AI Labs
- CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning [CVPR 2023] Georgia Institute of Technology, MIT-IBM Watson AI Lab, Rice University, IBM Research
- S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning [NeurIPS 2022] Xi’an Jiaotong University
- DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning [ECCV 2022] Northeastern University, Google Cloud AI, Google Research
- Learning to Prompt for Continual Learning [CVPR 2022] Northeastern University, Google Cloud AI, Google Research
## Adapter Based

- AdaPrefix++: Integrating Adapters, Prefixes and Hypernetwork for Continual Learning [WACV 2025] Indian Institute of Technology Hyderabad
- CMoA: Contrastive Mixture of Adapters for Generalized Few-Shot Continual Learning [TMM 2025] University of Oulu, TeleAI
- Adapter Merging with Centroid Prototype Mapping for Scalable Class-Incremental Learning [Arxiv 2024.12] Chiba University
- Linked Adapters: Linking Past and Future to Present for Effective Continual Learning [Arxiv 2024.12] Indian Institute of Technology Hyderabad, Swinburne University of Technology
- Adapter-Enhanced Semantic Prompting for Continual Learning [Arxiv 2024.12] Beijing University of Technology, Macquarie University, University of California at Merced
- MOS: Model Surgery for Pre-Trained Model-Based Class-Incremental Learning [AAAI 2025] Nanjing University
- Multilingual Continual Learning using Attention Distillation [HTML 2024] Amazon, India
- HyperAdapter: Generating Adapters for Pre-Trained Model-Based Continual Learning [OpenReview 2024.10] Paper under double-blind review
- ATLAS: Adapter-Based Multi-Modal Continual Learning with a Two-Stage Learning Strategy [Arxiv 2024.10] Shanghai Jiao Tong University, ShanghaiTech University, Tsinghua University
- Adaptive Adapter Routing for Long-Tailed Class-Incremental Learning [Arxiv 2024.09] Nanjing University
- Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models [Arxiv 2024.08] KU Leuven
- Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space [IJCNN 2024] Sun Yat-sen University
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [ECCV 2024] Xi’an Jiaotong University
- Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer [CVPR 2024] Huazhong University of Science and Tech., DAMO Academy, Alibaba Group
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [CVPR 2024] Nanjing University
## LoRA Based

- S-LoRA: Scalable Low-Rank Adaptation for Class Incremental Learning [Arxiv 2025.01] City University of Hong Kong, Harvard University, Xi’an Jiaotong University, Tencent AI Lab, Zhejiang University
- Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA [Arxiv 2024.12] University of New South Wales, CSIRO’s Data61
- DESIRE: Dynamic Knowledge Consolidation for Rehearsal-Free Continual Learning [Arxiv 2024.11] MAIS, Institute of Automation, Chinese Academy of Sciences
- Multi-LoRA continual learning based instruction tuning framework for universal information extraction [Knowledge-Based Systems 2025] Nankai University
- Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models [Arxiv 2024.10] University of Texas at Austin, SonyAI
- InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning [CVPR 2024] Nanjing University
- Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation [NeurIPSW 2024] University of Texas at Austin
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [CVPR 2024] Dalian University of Technology, UESTC, Tsinghua University
- Continual learning with low rank adaptation [NeurIPSW 2023] Amazon Web Services
## MoE/Ensemble Based

- MINGLE: Mixtures of Null-Space Gated Low-Rank Experts for Test-Time Continual Model Merging [Arxiv 2025.05] University of Electronic Science and Technology of China
- Training Consistent Mixture-of-Experts-Based Prompt Generator for Continual Learning [AAAI 2025] Northwestern Polytechnical University
- BECAME: BayEsian Continual Learning with Adaptive Model MErging [Arxiv 2025.04] Shanghai Jiao Tong University
- A scalable Bayesian continual learning framework for online and sequential decision making [NeurIPSW 2024] University of Oxford
- CAPrompt: Cyclic Prompt Aggregation for Pre-Trained Model Based Class Incremental Learning [Arxiv 2024.12] Peking University
- Learning Attentional Mixture of LoRAs for Language Model Continual Learning [Arxiv 2024.09] Nankai University
- Theory on Mixture-of-Experts in Continual Learning [Arxiv 2024.10] Singapore University of Technology and Design, University of Houston, The Ohio State University
- Weighted Ensemble Models Are Strong Continual Learners [ECCV 2024] Télécom-Paris, Institut Polytechnique de Paris
- MagMax: Leveraging Model Merging for Seamless Continual Learning [ECCV 2024] IDEAS NCBR, Warsaw University of Technology
- Continual Learning with Weight Interpolation [CVPR 2024] Wrocław University of Science and Technology, Rochester Institute of Technology
- LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models [Arxiv 2024.06] Nanjing University of Aeronautics and Astronautics
- Mixture of Experts Meets Prompt-Based Continual Learning [Arxiv 2024.05] The University of Texas at Austin, Hanoi University of Science and Technology, VinAI Research
- Learning More Generalized Experts by Merging Experts in Mixture-of-Experts [Arxiv 2024.05] KAIST
- MoRAL: MoE Augmented LoRA for LLMs’ Lifelong Learning [Arxiv 2024.02] Provable Responsible AI and Data Analytics (PRADA) Lab, KAUST, University of Macau
- Divide and not forget: Ensemble of selectively trained experts in Continual Learning [ICLR 2024] IDEAS-NCBR, Warsaw University of Technology
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [CVPR 2024] Dalian University of Technology, UESTC, Tsinghua University
- An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training [Arxiv 2023.06] University of Massachusetts Amherst, University of California Berkeley, MIT-IBM Watson AI Lab
- Lifelong Language Pretraining with Distribution-Specialized Experts [ICML 2023] The University of Texas at Austin, Google
- Continual Learning Beyond a Single Model [CoLLAs 2023] Bosch Center for Artificial Intelligence, Washington State University, Apple
- Mixture-of-Variational-Experts for Continual Learning [Arxiv 2022.03] Ulm University
- CoSCL: Cooperation of Small Continual Learners is Stronger Than a Big One [ECCV 2022] Tsinghua University
- Ex-Model: Continual Learning from a Stream of Trained Models [CVPRW 2022] University of Pisa
- Model Zoo: A Growing "Brain" That Learns Continually [ICLR 2022] University of Pennsylvania
- Routing Networks with Co-training for Continual Learning [ICMLW 2020] Google AI, Zurich
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning [ICLR 2020] Seoul National University
- Continual Learning in Task-Oriented Dialogue Systems [Arxiv 2020.12] HKUST, Facebook
## VLM Based

- Beyond CLIP Generalization: Against Forward&Backward Forgetting Adapter for Continual Learning of Vision-Language Models [Arxiv 2025.05] Xi’an Jiaotong University
- Language Guided Concept Bottleneck Models for Interpretable Continual Learning [Arxiv 2025.03] Institute of Automation, CAS
- IAP: Improving Continual Learning of Vision-Language Models via Instance-Aware Prompting [Arxiv 2025.03] Zhejiang University
- Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning [Arxiv 2025.03] Nankai University
- External Knowledge Injection for CLIP-Based Class-Incremental Learning [Arxiv 2025.03] Nanjing University
- Enhanced Continual Learning of Vision-Language Models with Model Fusion [ICLRW 2025] Shanghai Jiao Tong University, Tencent
- Visual Class Incremental Learning with Textual Priors Guidance based on an Adapted Vision-Language Model [TMM 2025] Sun Yat-sen University
- Efficient Few-Shot Continual Learning in Vision-Language Models [Arxiv 2025.02] University of Cambridge, Toyota Motor Europe
- Differentiable Prompt Learning for Vision Language Models [Arxiv 2024.12] Rensselaer Polytechnic Institute, IBM Research
- How to Merge Your Multimodal Models Over Time? [Arxiv 2024.12] University of Tübingen, University of Cambridge, Technical University of Munich
- Exemplar Masking for Multimodal Incremental Learning [Arxiv 2024.12] National Yang Ming Chiao Tung University, Google
- Retaining and Enhancing Pre-trained Knowledge in Vision-Language Models with Prompt Ensembling [WACV 2025] Seoul National University
- Continual learning with task specialist [Arxiv 2024.09] International Institute of Information Technology Bangalore, A*STAR
- A Practitioner’s Guide to Continual Multimodal Pretraining [NeurIPS 2024] University of Tübingen, Helmholtz Munich, Munich Center for ML, Google DeepMind
- CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models [NeurIPS 2024] UNSW Sydney, CSIRO’s Data61
- Stabilizing Zero-Shot Prediction: A Novel Antidote to Forgetting in Continual Vision-Language Tasks [NeurIPS 2024] National University of Defense Technology, Tsinghua University
- CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning [BMVC 2024] University of Modena and Reggio Emilia
- Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models [ECCV 2024] Tsinghua University, SmartMore, CUHK, HIT(SZ), Meta Reality Labs, HKU
- Anytime Continual Learning for Open Vocabulary Classification [ECCV 2024 (oral)] University of Illinois at Urbana-Champaign
- Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models [ECCV 2024] National Taiwan University, NVIDIA
- Adapt without Forgetting: Distill Proximity from Dual Teachers in Vision-Language Models [ECCV 2024] The University of Sydney, Huawei Noah’s Ark Lab
- Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion [ECCV 2024] Nankai University
- Semantic Residual Prompts for Continual Learning [ECCV 2024] University of Modena and Reggio Emilia
- Expand and Merge: Continual Learning with the Guidance of Fixed Text Embedding Space [IJCNN 2024] Sun Yat-sen University
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning [Arxiv 2024.05] Northwestern Polytechnical University, Singapore Management University, Zhejiang University, University of Adelaide
- TiC-CLIP: Continual Training of CLIP Models [ICLR 2024] Apple, Carnegie Mellon University
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [CVPR 2024] Dalian University of Technology, UESTC, Tsinghua University
- Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners [CVPR 2024] Kyung Hee University, Yonsei University
- Class Incremental Learning with Pre-trained Vision-Language Models [Arxiv 2023.10] Nankai University, University of Florence
- MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning [Arxiv 2023.07] ETS Montreal
- Learning without Forgetting for Vision-Language Models [Arxiv 2023.05] Nanjing University, Nanyang Technological University
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models [ICCV 2023] National University of Singapore, UC Berkeley, The Chinese University of Hong Kong
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023] Mohamed bin Zayed University of AI, Australian National University, Linkoping University, University of California, Merced, Google Research
- Introducing Language Guidance in Prompt-based Continual Learning [ICCV 2023] RPTU, DFKI, ETH Zurich, TUM, Google
- Continual Vision-Language Representation Learning with Off-Diagonal Information [ICML 2023] Zhejiang University, Huawei Cloud
- CLIP model is an Efficient Continual Learner [Arxiv 2022.10] Mohamed bin Zayed University of Artificial Intelligence, Australian National University, Monash University, Linkoping University
- Don’t Stop Learning: Towards Continual Learning for the CLIP Model [Arxiv 2022.07] Xidian University, University of Adelaide
- CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks [NeurIPS 2022] University of Southern California
- S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning [NeurIPS 2022] Xi’an Jiaotong University
- Robust Fine-Tuning of Zero-Shot Models [CVPR 2022] University of Washington, OpenAI, Columbia University, Google Research, Brain Team
- DesCLIP: Robust Continual Adaptation via General Attribute Descriptions for Pretrained Vision-Language Models [Arxiv 2025.02] University of Electronic Science and Technology of China
- Generative multi-modal models are good class incremental learners [CVPR 2024] Nankai University
## Diffusion Based

- Diffusion Meets Few-shot Class Incremental Learning [Arxiv 2025.03] NAVER AI Lab
- Continual learning with task specialist [Arxiv 2024.09] International Institute of Information Technology Bangalore, A*STAR
- Diffusion Model Meets Non-Exemplar Class-Incremental Learning and Beyond [Arxiv 2024.08] BNRist, Tsinghua University
- Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning [Arxiv 2024.03] VinAI Research, Monash University
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [ECCV 2024 (oral)] South China University of Technology, HKUST, China University of Petroleum, WeBank, Pazhou Laboratory
- DiffClass: Diffusion-Based Class Incremental Learning [ECCV 2024] Northeastern University, ETH Zürich
- GUIDE: Guidance-based Incremental Learning with Diffusion Models [Arxiv 2024.03] Warsaw University of Technology
- SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection [CVPR 2024] UNIST, LG Electronics, KETI
- Class-Incremental Learning using Diffusion Model for Distillation and Replay [ICCVW 2023] Tokyo Institute of Technology, Artificial Intelligence Research Center
- DDGR: Continual Learning with Deep Diffusion-based Generative Replay [ICML 2023] Wuhan University
## Others

- Continuous Subspace Optimization for Continual Learning [Arxiv 2025.05] Nanjing University
- Task-Core Memory Management and Consolidation for Long-term Continual Learning [Arxiv 2025.05] East China Normal University, Nanyang Technological University, Fudan University
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [Arxiv 2025.05] Nanyang Technological University
- Brain-Inspired Quantum Neural Architectures for Pattern Recognition: Integrating QSNN and QLSTM [Arxiv 2025.05] University of Granada
- Mathematics of Continual Learning [Arxiv 2025.04] Michigan State University
- Audio-Visual Class-Incremental Learning for Fish Feeding Intensity Assessment in Aquaculture [Arxiv 2025.04] University of Surrey, Tianjin University
- Bayesian continual learning and forgetting in neural networks [Arxiv 2025.04] Université Paris-Saclay
- Memory-Statistics Tradeoff in Continual Learning with Structural Regularization [Arxiv 2025.04] Rice University, UC Berkeley, Johns Hopkins University
- Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models [Arxiv 2025.05] CMU
- Efficient Continual Learning through Frequency Decomposition and Integration [Arxiv 2025.03] ICT, CAS
- Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to Leverage Ontologies, and How to Do Without Them [Arxiv 2025.03] Bielefeld University
- Adaptive Few-Shot Class-Incremental Learning via Latent Variable Models [Arxiv 2025.03] University of Cambridge
- Continual learning via probabilistic exchangeable sequence modelling [Arxiv 2025.03] University of Oxford
- Global Convergence of Continual Learning on Non-IID Data [Arxiv 2025.03] Hong Kong Institute of Science & Innovation, CAS
- Restoring Forgotten Knowledge in Non-Exemplar Class Incremental Learning through Test-Time Semantic Evolution [Arxiv 2025.03] Nankai University
- Continual Learning Should Move Beyond Incremental Classification [Arxiv 2025.02] Hessian Center for AI
- Continual learning in the brain [Arxiv 2025] University of California, Irvine
- GRAPHMOE: Amplifying Cognitive Depth of Mixture-of-Experts Network via Introducing Self-Rethinking Mechanism [Arxiv 2025.01] Institute for Advanced Algorithms Research, Institute of Computing Technology CAS, Harbin Institute of Technology
- ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think [Arxiv 2025.01] Tsinghua University, DAMO Academy
- Continual Learning Using a Kernel-Based Method Over Foundation Models [AAAI 2025] University of Illinois Chicago, Intel Labs
- Memory-efficient Continual Learning with Neural Collapse Contrastive [WACV 2025] Université d'Orléans, ETIS - CY Cergy Paris University
- Mamba-CL: Optimizing Selective State Space Model in Null Space for Continual Learning [Arxiv 2024.11] Xidian University, Northwestern Polytechnical University
- Model Sensitivity Aware Continual Learning [NeurIPS 2024] University of Maryland College Park
- Random Representations Outperform Online Continually Learned Representations [NeurIPS 2024] University of Oxford, IIIT Hyderabad, Apple
- Sparse Orthogonal Parameters Tuning for Continual Learning [Arxiv 2024.11] Peking University, Shenzhen
- Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models [CoLLAs 2024] University of Tübingen, Helmholtz Munich, Google DeepMind, TU Munich
- Computationally Budgeted Continual Learning: What Does Matter? [CVPR 2023] University of Oxford, KAUST, Meta AI
## Application: Embodied AI

- Continual Robot Learning via Language-Guided Skill Acquisition [ICRAW 2025] Georgia Institute of Technology
- SPECI: Skill Prompts based Hierarchical Continual Imitation Learning for Robot Manipulation [Arxiv 2025.04] Institute of Automation, CAS
- Few-Shot Vision-Language Action-Incremental Policy Learning [Arxiv 2025.04] Harbin Institute of Technology
- Think Small, Act Big: Primitive Prompt Learning for Lifelong Robot Manipulation [Arxiv 2025.04] Shanghai AI Laboratory
- Replay4NCL: An Efficient Memory Replay-based Methodology for Neuromorphic Continual Learning in Embedded AI Systems [DAC 2025] United Arab Emirates University
- Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters [Arxiv 2025.03] King’s College London
- iManip: Skill-Incremental Learning for Robotic Manipulation [Arxiv 2025.03] Sun Yat-sen University
- Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation [NeurIPS 2024] Sungkyunkwan University, Carnegie Mellon University
- Incremental Learning for Robot Shared Autonomy [Arxiv 2024.10] Robotics Institute, Carnegie Mellon University
- Task-unaware Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation [Arxiv 2024.10] TU Delft, Booking.com, UCSD
- Vision-Language Navigation with Continual Learning [Arxiv 2024.09] Institute of Automation, Chinese Academy of Sciences
- Continual Vision-and-Language Navigation [Arxiv 2024.03] Seoul National University
- Online Continual Learning For Interactive Instruction Following Agents [ICLR 2024] Yonsei University, Seoul National University
- VOYAGER: An Open-Ended Embodied Agent with Large Language Models [NeurIPSW 2023] NVIDIA, Caltech, UT Austin, Stanford, UW Madison
- CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents [CoLLAs 2022] Carnegie Mellon University, Georgia Institute of Technology, Allen Institute for AI
- Stable Continual Reinforcement Learning via Diffusion-based Trajectory Replay [ICLRW 2024] Nanjing University
- Evaluations of the Gap between Supervised and Reinforcement Lifelong Learning on Robotic Manipulation Tasks [CoRL 2022] Tsinghua University
- Towards continual reinforcement learning: A review and perspectives [JAIR 2022] McGill University, Université de Montréal, IBM Research, DeepMind
- Lifelong Learning with a Changing Action Set [AAAI 2020] University of Massachusetts Amherst, Adobe Research
- Continual Reinforcement Learning in 3D Non-stationary Environments [CVPRW 2020] University of Bologna, University of Michigan, Purdue University
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer [ICMLW 2019] INRIA, AI Lab, Softbank Robotics Europe, Theresis Lab, Thales
## LLM Based

- T2I-ConBench: Text-to-Image Benchmark for Continual Post-training [Arxiv 2025.05] Shanghai Jiao Tong University, Huawei
- Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models [Arxiv 2025.05] Nanjing University
- ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models [Arxiv 2025.05] Independent, HKUST, The Ohio State University
- Memorization and Knowledge Injection in Gated LLMs [Arxiv 2025.04] Harvard University
- What Causes Knowledge Loss in Multilingual Language Models? [Arxiv 2025.04] Institut Teknologi Bandung
- Prototype Conditioned Generative Replay for Continual Learning in NLP [ACL 2025] HKUST
- Continual Pre-Training is (not) What You Need in Domain Adaption [Arxiv 2025.04] National Taiwan University, NVIDIA
- Teaching Large Language Models to Reason through Learning and Forgetting [Arxiv 2025.04] Université de Montréal, Amazon Web Services
- Sculpting Subspaces: Constrained Full Fine-Tuning in LLMs for Continual Learning [Arxiv 2025.04] Red Hat AI Innovation, MIT-IBM Watson AI Lab
- LoRASculpt: Sculpting LoRA for Harmonizing General and Specialized Knowledge in Multimodal Large Language Models [Arxiv 2025.03] Wuhan University
- Recurrent Knowledge Identification and Fusion for Language Model Continual Learning [Arxiv 2025.02] The Hong Kong Polytechnic University, Tsinghua University, Peking University, Huawei Hong Kong Research Center, University of Illinois at Chicago
- From RAG to Memory: Non-Parametric Continual Learning for Large Language Models [Arxiv 2025.02] The Ohio State University, University of Illinois Urbana-Champaign
- Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge Expansion [Arxiv 2025.02] Bosch Center for Artificial Intelligence
- DATA: Decomposed Attention-based Task Adaptation for Rehearsal-Free Continual Learning [Arxiv 2025.02] Institute of Automation, Chinese Academy of Sciences
- Mitigating Visual Knowledge Forgetting in MLLM Instruction-tuning via Modality-decoupled Gradient Descent [Arxiv 2025.02] UC San Diego
- Continual LLaVA: Continual Instruction Tuning in Large Vision-Language Models [Arxiv 2024.11] Mohamed bin Zayed University of Artificial Intelligence, MEGVII Technology, Fudan University, Sun Yat-sen University
- LLMs Can Evolve Continually on Modality for X-Modal Reasoning [Arxiv 2024.10] Dalian University of Technology, Huawei Noah’s Ark Lab, Tsinghua University, HKUST
- LLaCA: Multimodal Large Language Continual Assistant [Arxiv 2024.10] East China Normal University, Xiamen University, Tencent YouTu Lab
- Is Parameter Collision Hindering Continual Learning in LLMs? [Arxiv 2024.10] Peking University, DAMO Academy
- CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model [Arxiv 2024.10] UESTC, Tongji University
- ModalPrompt: Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models [Arxiv 2024.10] Institute of Automation, CAS
- Learning Attentional Mixture of LoRAs for Language Model Continual Learning [Arxiv 2024.09] Nankai University
- LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training [Arxiv 2024.06] Soochow University, Shanghai AI Laboratory, Shanghai Jiao Tong University, Fudan University, CUHK
- LLAMA PRO: Progressive LLaMA with Block Expansion [Arxiv 2024.05] The University of Hong Kong, Tencent PCG, Shanghai Jiao Tong University, Beijing Language and Culture University
- COPR: Continual Learning Human Preference through Optimal Policy Regularization [Arxiv 2024.05] Harbin Institute of Technology (Shenzhen), Peng Cheng Laboratory, KCL
- CPPO: Continual Learning for Reinforcement Learning with Human Feedback [ICLR 2024] Harbin Institute of Technology (Shenzhen), Peng Cheng Laboratory, KCL
- Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting [EMNLP 2024] Nanyang Technological University
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [Arxiv 2023.10] Fudan University, UCSB, Shanghai AI Laboratory
- ConPET: Continual Parameter-Efficient Tuning for Large Language Models [Arxiv 2023.09] Tsinghua University, Tencent
- Continual Pre-Training of Large Language Models: How to (re)warm your model? [ICML 2023] Université de Montréal
- Lifelong Language Pretraining with Distribution-Specialized Experts [ICML 2023] The University of Texas at Austin, Google
- Exploring Continual Learning for Code Generation Models [ACL 2023] University of North Carolina, AWS AI Labs, Amazon Alexa AI
- Drinking from a firehose: Continual learning with web-scale natural language [TPAMI 2023] University of Southern California, Intel Labs
- CITB: A Benchmark for Continual Instruction Tuning [EMNLP 2023] University of Technology Sydney, University of Liverpool, University of Wollongong
- TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models [EMNLP 2022] KAIST, LG AI Research, Korea University
- Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora [NAACL 2022] University of Southern California, AWS AI Labs
- Towards continual knowledge learning of language models [ICLR 2022] KAIST, LG AI Research
- ConTinTin: Continual Learning from Task Instructions [ACL 2022] Temple University, Salesforce Research
- TimeLMs: Diachronic Language Models from Twitter [ACL 2022] University of Porto, Snap, Cardiff University
- Dynamic language models for continuously evolving content [KDD 2021] Google Research
## Application: AIGC

- Reward Incremental Learning in Text-to-Image Generation [Arxiv 2024.11] CyberAgent, The University of Tokyo
- Incremental Image Generation with Diffusion Models by Label Embedding Initialization and Fusion [ACMMM 2024] Nanjing University, Tencent AI Lab
- Assessing Open-world Forgetting in Generative Image Model Customization [Arxiv 2024.10] Computer Vision Center, Universitat Autònoma de Barcelona
- Low-Rank Continual Personalization of Diffusion Models [Arxiv 2024.10] Warsaw University of Technology
- Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters [CVPRW 2024] Samsung Research America, Georgia Institute of Technology
- Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA [TMLR 2024] Samsung Research America, Georgia Institute of Technology
- Continual Learning of Diffusion Models with Generative Distillation [CoLLAs 2024] Master in Computer Vision (Barcelona), Apple, KU Leuven
## Application: Others

- Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer [Arxiv 2025.05] Nokia Bell Labs
- MetaCLBench: Meta Continual Learning Benchmark on Resource-Constrained Edge Devices [Arxiv 2025.03] HKUST, University of Cambridge, Samsung AI Center
- Conditioned Prompt-Optimization for Continual Deepfake Detection [ICPR 2024] University of Trento, Fondazione Bruno Kessler
- A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials [WACV 2023] ETH Zurich, Singapore Management University, Xi’an Jiaotong University, Harbin Institute of Technology, KU Leuven
## Acknowledgement

- Figure from this URL: Lifelong learning? Part-time undergraduate provision is in crisis.