diff --git a/embedding-plot.html b/embedding-plot.html
new file mode 100644
index 0000000..5727092
--- /dev/null
+++ b/embedding-plot.html
@@ -0,0 +1,67 @@
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/papers.md b/papers.md
new file mode 100644
index 0000000..bb86dc6
--- /dev/null
+++ b/papers.md
@@ -0,0 +1,20398 @@
+---
+title: "Research"
+layout: page
+---
+## Papers graph
+
+{% include embedding-plot.html %}
+
+## Publications
+
+In this section we maintain an updated list of publications related to Continual Learning.
+This reference list is automatically generated from a single BibTeX file maintained
+by the ContinualAI community through an open Mendeley group! Join our group [here](https://www.mendeley.com/community/continual-learning-papers/) to add a reference to your paper! Please remember to follow the (very simple) [contribution guidelines](https://github.com/ContinualAI/wiki#how-to-contribute-to-the-continualai-database-of-publications) when adding new papers.
+
+**Search among 301 papers!**
+
+**Filter list by keyword:**
+**Filter list by regex:**
+**Filter list by year:** + +[framework] [som] [sparsity] [dual] [spiking] [rnn] [nlp] [graph] [vision] [hebbian] [audio] [bayes] [generative] [mnist] [fashion] [cifar] [core50] [imagenet] [omniglot] [cubs] [experimental] [theoretical] + +Applications +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**22 papers** + +In this section we maintain a list of application-focused papers on continual learning and related topics. + +- `Findings of the First Shared Task on Lifelong Learning Machine Translation `__ by Loïc Barrault, Magdalena Biesialska, Marta R. Costa-jussà, Fethi Bougares and Olivier Galibert. *Proceedings of the Fifth Conference on Machine Translation*, 56--64, 2020. [framework] [nlp] |barrault2020Applications| +- `Continual Learning of Predictive Models in Video Sequences via Variational Autoencoders `__ by Damian Campo, Giulia Slavic, Mohamad Baydoun, Lucio Marcenaro and Carlo Regazzoni. *arXiv*, 2020. [vision] |campo2020Applications| +- `Unsupervised Model Personalization While Preserving Privacy and Scalability: An Open Problem `__ by Matthias De Lange, Xu Jia, Sarah Parisot, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars. *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 14451--14460, 2020. [framework] [mnist] [vision] |delange2020Applications| +- `Incremental Learning for End-to-End Automatic Speech Recognition `__ by Li Fu, Xiaoxiao Li and Libo Zi. *arXiv*, 2020. [audio] |fu2020Applications| +- `Neural Topic Modeling with Continual Lifelong Learning `__ by Pankaj Gupta, Yatin Chaudhary, Thomas Runkler and Hinrich Schütze. *ICML*, 2020. [nlp] |gupta2020Applications| +- `CLOPS: Continual Learning of Physiological Signals `__ by Dani Kiyasseh, Tingting Zhu and David A Clifton. *arXiv*, 2020. |kiyasseh2020Applications| +- `Clinical Applications of Continual Learning Machine Learning `__ by Cecilia S Lee and Aaron Y Lee. *The Lancet Digital Health*, e279--e281, 2020.
|lee2020Applications| +- `Continual Learning for Domain Adaptation in Chest X-Ray Classification `__ by Matthias Lenga, Heinrich Schulz and Axel Saalbach. *arXiv*, 1--11, 2020. [vision] |lenga2020Applications| +- `Sequential Domain Adaptation through Elastic Weight Consolidation for Sentiment Analysis `__ by Avinash Madasu and Vijjini Anvesh Rao. *arXiv*, 2020. [nlp] [rnn] |madasu2020Applications| +- `Importance Driven Continual Learning for Segmentation Across Domains `__ by Sinan Özgür Özgün, Anne-Marie Rickmann, Abhijit Guha Roy and Christian Wachinger. *arXiv*, 1--10, 2020. [vision] |ozgun2020Applications| +- `LAMOL: LAnguage MOdeling for Lifelong Language Learning `__ by Fan-Keng Sun, Cheng-Hao Ho and Hung-Yi Lee. *ICLR*, 2020. [nlp] |sun2020Applications| +- `Non-Parametric Adaptation for Neural Machine Translation `__ by Ankur Bapna and Orhan Firat. *arXiv*, 2019. [nlp] |bapna2019Applications| +- `Episodic Memory in Lifelong Language Learning `__ by Cyprien de Masson D'Autume, Sebastian Ruder, Lingpeng Kong and Dani Yogatama. *NeurIPS*, 2019. [nlp] |dautume2019Applications| +- `Continual Adaptation for Efficient Machine Communication `__ by Robert D Hawkins, Minae Kwon, Dorsa Sadigh and Noah D Goodman. *Proceedings of the ICML Workshop on Adaptive & Multitask Learning: Algorithms & Systems*, 2019. |hawkins2019Applications| +- `Continual Learning for Sentence Representations Using Conceptors `__ by Tianlin Liu, Lyle Ungar and João Sedoc. *NAACL*, 2019. [nlp] |liu2019Applications| +- `Lifelong and Interactive Learning of Factual Knowledge in Dialogues `__ by Sahisnu Mazumder, Bing Liu, Shuai Wang and Nianzu Ma. *Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue*, 21--31, 2019. [nlp] |mazumder2019Applications| +- `Making Good on LSTMs' Unfulfilled Promise `__ by Daniel Philps, Artur d'Avila Garcez and Tillman Weyde. *arXiv*, 2019.
[rnn] |philps2019Applications| +- `Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation `__ by Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh and Philipp Koehn. *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, 2062--2068, 2019. [nlp] [rnn] |thompson2019aApplications| +- `Lifelong Learning for Scene Recognition in Remote Sensing Images `__ by Min Zhai, Huaping Liu and Fuchun Sun. *IEEE Geoscience and Remote Sensing Letters*, 1472--1476, 2019. [vision] |zhai2019Applications| +- `Towards Continual Learning in Medical Imaging `__ by Chaitanya Baweja, Ben Glocker and Konstantinos Kamnitsas. *NeurIPS Workshop on Continual Learning*, 1--4, 2018. [vision] |baweja2018Applications| +- `Toward Continual Learning for Conversational Agents `__ by Sungjin Lee. *arXiv*, 2018. [nlp] |lee2018Applications| +- `Principles of Lifelong Learning for Predictive User Modeling `__ by Ashish Kapoor and Eric Horvitz. *User Modeling 2007*, 37--46, 2009. |kapoor2009Applications| + +Architectural Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**26 papers** + +In this section we collect all the papers introducing a continual learning strategy based on architectural methods. + +- `Continual Learning with Adaptive Weights (CLAW) `__ by Tameem Adel, Han Zhao and Richard E Turner. *International Conference on Learning Representations*, 2020. [cifar] [mnist] [omniglot] |adel2020Architectural_Methods| +- `Continual Learning with Gated Incremental Memories for Sequential Data Processing `__ by Andrea Cossu, Antonio Carta and Davide Bacciu. *Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020)*, 2020.
[mnist] [rnn] |cossu2020Architectural_Methods| +- `Continual Learning in Recurrent Neural Networks `__ by Benjamin Ehret, Christian Henning, Maria Cervera, Alexander Meulemans, Johannes Von Oswald and Benjamin F. Grewe. *International Conference on Learning Representations*, 2020. [audio] [rnn] |ehret2020Architectural_Methods| +- `Bayesian Nonparametric Weight Factorization for Continual Learning `__ by Nikhil Mehta, Kevin J Liang and Lawrence Carin. *arXiv*, 1--17, 2020. [bayes] [cifar] [mnist] [sparsity] |mehta2020Architectural_Methods| +- `SpaceNet: Make Free Space For Continual Learning `__ by Ghada Sokar, Decebal Constantin Mocanu and Mykola Pechenizkiy. *arXiv*, 2020. [cifar] [fashion] [mnist] [sparsity] |sokar2020Architectural_Methods| +- `Efficient Continual Learning with Modular Networks and Task-Driven Priors `__ by Tom Veniat, Ludovic Denoyer and Marc'Aurelio Ranzato. *arXiv*, 2020. [experimental] |veniat2020Architectural_Methods| +- `Progressive Memory Banks for Incremental Domain Adaptation `__ by Nabiha Asghar, Lili Mou, Kira A Selby, Kevin D Pantasdo, Pascal Poupart and Xin Jiang. *International Conference on Learning Representations*, 2019. [nlp] [rnn] |asghar2019Architectural_Methods| +- `Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments `__ by Andri Ashfahani and Mahardhika Pratama. *Proceedings of the 2019 SIAM International Conference on Data Mining*, 666--674, 2019. [mnist] |ashfahani2019Architectural_Methods| +- `Compacting, Picking and Growing for Unforgetting Continual Learning `__ by Steven C Y Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan and Chu-Song Chen. *NeurIPS*, 13669--13679, 2019. [cifar] [imagenet] |hung2019Architectural_Methods| +- `Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting `__ by Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher and Caiming Xiong. *arXiv*, 2019.
[cifar] [mnist] |li2019aArchitectural_Methods| +- `Towards AutoML in the Presence of Drift: First Results `__ by Jorge G. Madrid, Hugo Jair Escalante, Eduardo F. Morales, Wei-Wei Tu, Yang Yu, Lisheng Sun-Hosoya, Isabelle Guyon and Michele Sebag. *arXiv*, 2019. |madrid2019Architectural_Methods| +- `Continual Unsupervised Representation Learning `__ by Dushyant Rao, Francesco Visin, Andrei A Rusu, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. *NeurIPS*, 2019. [mnist] [omniglot] |rao2019Architectural_Methods| +- `A Progressive Model to Enable Continual Learning for Semantic Slot Filling `__ by Yilin Shen, Xiangyu Zeng and Hongxia Jin. *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing*, 1279--1284, 2019. [nlp] |shen2019Architectural_Methods| +- `Adaptive Compression-Based Lifelong Learning `__ by Shivangi Srivastava, Maxim Berman, Matthew B Blaschko and Devis Tuia. *BMVC*, 2019. [imagenet] [sparsity] |srivastava2019Architectural_Methods| +- `Frosting Weights for Better Continual Training `__ by Xiaofeng Zhu, Feng Liu, Goce Trajcevski and Dingding Wang. *2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)*, 506--510, 2019. [cifar] [mnist] |zhu2019Architectural_Methods| +- `Dynamic Few-Shot Visual Learning Without Forgetting `__ by Spyros Gidaris and Nikos Komodakis. *Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, 4367--4375, 2018. [imagenet] [vision] |gidaris2018Architectural_Methods| +- `HOUDINI: Lifelong Learning as Program Synthesis `__ by Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton and Swarat Chaudhuri. *NeurIPS*, 8687--8698, 2018. |valkov2018Architectural_Methods| +- `Reinforced Continual Learning `__ by Ju Xu and Zhanxing Zhu. *Advances in Neural Information Processing Systems*, 899--908, 2018.
[cifar] [mnist] |xu2018Architectural_Methods| +- `Lifelong Learning With Dynamically Expandable Networks `__ by Jaehong Yoon, Eunho Yang, Jeongtae Lee and Sung Ju Hwang. *ICLR*, 11, 2018. [cifar] [mnist] [sparsity] |yoon2018Architectural_Methods| +- `Expert Gate: Lifelong Learning with a Network of Experts `__ by Rahaf Aljundi, Punarjay Chakravarty and Tinne Tuytelaars. *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2017. [vision] |aljundi2017Architectural_Methods| +- `Neurogenesis Deep Learning `__ by Timothy John Draelos, Nadine E Miner, Christopher Lamb, Jonathan A Cox, Craig Michael Vineyard, Kristofor David Carlson, William Mark Severa, Conrad D James and James Bradley Aimone. *IJCNN*, 2017. [mnist] |draelos2017Architectural_Methods| +- `Net2Net: Accelerating Learning via Knowledge Transfer `__ by Tianqi Chen, Ian Goodfellow and Jonathon Shlens. *ICLR*, 2016. |chen2016Architectural_Methods| +- `Continual Learning through Evolvable Neural Turing Machines `__ by Benno Luders, Mikkel Schlager and Sebastian Risi. *NIPS 2016 Workshop on Continual Learning and Deep Networks*, 2016. |luders2016Architectural_Methods| +- `Progressive Neural Networks `__ by Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu and Raia Hadsell. *arXiv*, 2016. [mnist] |rusu2016Architectural_Methods| +- `Knowledge Transfer in Deep Block-Modular Neural Networks `__ by Alexander V. Terekhov, Guglielmo Montone and J. Kevin O'Regan. *Conference on Biomimetic and Biohybrid Systems*, 268--279, 2015. [vision] |terekhov2015Architectural_Methods| +- `A Self-Organising Network That Grows When Required `__ by Stephen Marsland, Jonathan Shapiro and Ulrich Nehmzow. *Neural Networks*, 1041--1058, 2002.
[som] |marsland2002Architectural_Methods| + +Benchmarks +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**5 papers** + +In this section we list all the papers proposing new benchmarks for continual learning and related topics. + +- `Defining Benchmarks for Continual Few-Shot Learning `__ by Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal and Amos Storkey. *arXiv*, 2020. [imagenet] |antoniou2020Benchmarks| +- `Evaluating Online Continual Learning with CALM `__ by Germán Kruszewski, Ionut-Teodor Sorodoc and Tomas Mikolov. *arXiv*, 2020. [nlp] [rnn] |kruszewski2020aBenchmarks| +- `Continual Reinforcement Learning in 3D Non-Stationary Environments `__ by Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello and Davide Maltoni. *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, 248--249, 2020. |lomonaco2020Benchmarks| +- `OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning `__ by Qi She, Fan Feng, Xinyue Hao, Qihan Yang, Chuanlin Lan, Vincenzo Lomonaco, Xuesong Shi, Zhengwei Wang, Yao Guo, Yimin Zhang, Fei Qiao and Rosa H M Chan. *arXiv*, 1--8, 2019. [vision] |she2019Benchmarks| +- `CORe50: A New Dataset and Benchmark for Continuous Object Recognition `__ by Vincenzo Lomonaco and Davide Maltoni. *Proceedings of the 1st Annual Conference on Robot Learning*, 17--26, 2017. [vision] |lomonaco2017Benchmarks| + +Bioinspired Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**22 papers** + +In this section we list all the papers related to bioinspired continual learning approaches. + +- `Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks `__ by Jason M. Allred and Kaushik Roy. *Frontiers in Neuroscience*, 7, 2020. [spiking] |allred2020Bioinspired_Methods| +- `Cognitively-Inspired Model for Incremental Learning Using a Few Examples `__ by A. Ayub and A. R. Wagner.
*Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, 2020. [cifar] [cubs] [dual] |ayub2020Bioinspired_Methods| +- `Storing Encoded Episodes as Concepts for Continual Learning `__ by Ali Ayub and Alan R. Wagner. *arXiv*, 2020. [generative] [imagenet] [mnist] |ayub2020aBioinspired_Methods| +- `Spiking Neural Predictive Coding for Continual Learning from Data Streams `__ by Alexander Ororbia. *arXiv*, 2020. [spiking] |ororbia2020Bioinspired_Methods| +- `Brain-like Replay for Continual Learning with Artificial Neural Networks `__ by Gido M. van de Ven, Hava T. Siegelmann and Andreas S. Tolias. *International Conference on Learning Representations (Workshop on Bridging AI and Cognitive Science)*, 2020. [cifar] |vandeven2020aBioinspired_Methods| +- `Selfless Sequential Learning `__ by Rahaf Aljundi, Marcus Rohrbach and Tinne Tuytelaars. *ICLR*, 2019. [cifar] [mnist] [sparsity] |aljundi2019cBioinspired_Methods| +- `Backpropamine: Training Self-Modifying Neural Networks with Differentiable Neuromodulated Plasticity `__ by Thomas Miconi, Aditya Rawal, Jeff Clune and Kenneth O Stanley. *ICLR*, 2019. |miconi2019Bioinspired_Methods| +- `Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations `__ by Alexander Ororbia, Ankur Mali, C Lee Giles and Daniel Kifer. *arXiv*, 2019. [mnist] [rnn] [spiking] |ororbia2019Bioinspired_Methods| +- `Lifelong Neural Predictive Coding: Sparsity Yields Less Forgetting When Learning Cumulatively `__ by Alexander Ororbia, Ankur Mali, Daniel Kifer and C Lee Giles. *arXiv*, 1--11, 2019. [fashion] [mnist] [sparsity] |ororbia2019aBioinspired_Methods| +- `FearNet: Brain-Inspired Model for Incremental Learning `__ by Ronald Kemker and Christopher Kanan. *ICLR*, 2018. [audio] [cifar] [generative] |kemker2018Bioinspired_Methods| +- `Differentiable Plasticity: Training Plastic Neural Networks with Backpropagation `__ by Thomas Miconi, Kenneth Stanley and Jeff Clune.
*International Conference on Machine Learning*, 3559--3568, 2018. |miconi2018Bioinspired_Methods| +- `Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization `__ by German I Parisi, Jun Tani, Cornelius Weber and Stefan Wermter. *Frontiers in Neurorobotics*, 2018. [core50] [dual] [rnn] [som] |parisi2018Bioinspired_Methods| +- `SLAYER: Spike Layer Error Reassignment in Time `__ by Sumit Bam Shrestha and Garrick Orchard. *Advances in Neural Information Processing Systems 31*, 1412--1421, 2018. |shrestha2018Bioinspired_Methods| +- `Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World `__ by Sahil Garg, Irina Rish, Guillermo Cecchi and Aurelie Lozano. *IJCAI International Joint Conference on Artificial Intelligence*, 1696--1702, 2017. [nlp] [vision] |garg2017Bioinspired_Methods| +- `Diffusion-Based Neuromodulation Can Eliminate Catastrophic Forgetting in Simple Neural Networks `__ by Roby Velez and Jeff Clune. *PLoS ONE*, 1--31, 2017. |velez2017Bioinspired_Methods| +- `How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites `__ by Subutai Ahmad and Jeff Hawkins. *arXiv*, 1--23, 2016. [hebbian] [sparsity] |ahmad2016Bioinspired_Methods| +- `Continuous Online Sequence Learning with an Unsupervised Neural Network Model `__ by Yuwei Cui, Subutai Ahmad and Jeff Hawkins. *Neural Computation*, 2474--2504, 2016. [spiking] |cui2016Bioinspired_Methods| +- `Backpropagation of Hebbian Plasticity for Continual Learning `__ by Thomas Miconi. *NIPS Workshop - Continual Learning*, 5, 2016. |miconi2016Bioinspired_Methods| +- `Mitigation of Catastrophic Forgetting in Recurrent Neural Networks Using a Fixed Expansion Layer `__ by Robert Coop and Itamar Arel. *The 2013 International Joint Conference on Neural Networks (IJCNN)*, 1--7, 2013.
[mnist] [rnn] [sparsity] |coop2013Bioinspired_Methods| +- `Compete to Compute `__ by Rupesh Kumar Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez and Jürgen Schmidhuber. *Advances in Neural Information Processing Systems 26*, 2013. [mnist] [sparsity] |srivastava2013Bioinspired_Methods| +- `Mitigation of Catastrophic Interference in Neural Networks Using a Fixed Expansion Layer `__ by Robert Coop and Itamar Arel. *2012 IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS)*, 726--729, 2012. [sparsity] |coop2012Bioinspired_Methods| +- `Synaptic Plasticity: Taming the Beast `__ by L F Abbott and Sacha B Nelson. *Nature Neuroscience*, 1178--1183, 2000. [hebbian] |abbott2000Bioinspired_Methods| + +Catastrophic Forgetting Studies +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**9 papers** + +In this section we list all the major contributions trying to understand catastrophic forgetting and its implications for machines that learn continually. + +- `Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn to Learn and Forget to Forget `__ by Guy Davidson and Michael C Mozer. *CVPR*, 9282--9293, 2020. [vision] |davidson2020Catastrophic_Forgetting_Studies| +- `Dissecting Catastrophic Forgetting in Continual Learning by Deep Visualization `__ by Giang Nguyen, Shuan Chen, Thao Do, Tae Joon Jun, Ho-Jin Choi and Daeyoung Kim. *arXiv*, 2020. [vision] |nguyen2020Catastrophic_Forgetting_Studies| +- `Toward Understanding Catastrophic Forgetting in Continual Learning `__ by Cuong V Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan and Stefano Soatto. *arXiv*, 2019. [cifar] [mnist] |nguyen2019aCatastrophic_Forgetting_Studies| +- `An Empirical Study of Example Forgetting during Deep Neural Network Learning `__ by Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio and Geoffrey J Gordon. *International Conference on Learning Representations*, 2019.
[cifar] [mnist] |toneva2019Catastrophic_Forgetting_Studies| +- `Localizing Catastrophic Forgetting in Neural Networks `__ by Felix Wiewel and Bin Yang. *arXiv*, 2019. [mnist] |wiewel2019Catastrophic_Forgetting_Studies| +- `Don't Forget, There Is More than Forgetting: New Metrics for Continual Learning `__ by Natalia Díaz-Rodríguez, Vincenzo Lomonaco, David Filliat and Davide Maltoni. *arXiv*, 2018. [cifar] [framework] |diazrodriguez2018Catastrophic_Forgetting_Studies| +- `The Stability-Plasticity Dilemma: Investigating the Continuum from Catastrophic Forgetting to Age-Limited Learning Effects `__ by Martial Mermillod, Aurélia Bugaiska and Patrick Bonin. *Frontiers in Psychology*, 504, 2013. |mermillod2013Catastrophic_Forgetting_Studies| +- `Catastrophic Forgetting in Connectionist Networks `__ by Robert French. *Trends in Cognitive Sciences*, 128--135, 1999. [sparsity] |french1999Catastrophic_Forgetting_Studies| +- `How Does a Brain Build a Cognitive Code? `__ by Stephen Grossberg. *Psychological Review*, 1--51, 1980. |grossberg1980Catastrophic_Forgetting_Studies| + +Classics +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**8 papers** + +In this section you'll find pioneering and classic continual learning papers. We recommend reading all the papers in this section for a good background on current continual deep learning developments. + +- `The Organization of Behavior: A Neuropsychological Theory `__ by D O Hebb. *Lawrence Erlbaum*, 2002. [hebbian] |hebb2002Classics| +- `Pseudo-Recurrent Connectionist Networks: An Approach to the 'Sensitivity-Stability' Dilemma `__ by Robert French. *Connection Science*, 353--380, 1997. [dual] |french1997Classics| +- `CHILD: A First Step Towards Continual Learning `__ by Mark B Ring. *Machine Learning*, 77--104, 1997. |ring1997Classics| +- `Is Learning The N-Th Thing Any Easier Than Learning The First? `__ by Sebastian Thrun. *Advances in Neural Information Processing Systems 8*, 640--646, 1996.
[vision] |thrun1996aClassics| +- `Learning in the Presence of Concept Drift and Hidden Contexts `__ by Gerhard Widmer and Miroslav Kubat. *Machine Learning*, 69--101, 1996. |widmer1996Classics| +- `Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks `__ by Robert French. *In Proceedings of the 13th Annual Cognitive Science Society Conference*, 173--178, 1991. [sparsity] |french1991Classics| +- `The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network `__ by Gail A. Carpenter and Stephen Grossberg. *Computer*, 77--88, 1988. |carpenter1988Classics| +- `How Does a Brain Build a Cognitive Code? `__ by Stephen Grossberg. *Psychological Review*, 1--51, 1980. |grossberg1980Classics| + +Continual Few Shot Learning +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**7 papers** + +Here we list the papers related to few-shot continual and incremental learning. + +- `Defining Benchmarks for Continual Few-Shot Learning `__ by Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal and Amos Storkey. *arXiv*, 2020. [imagenet] |antoniou2020Continual_Few_Shot_Learning| +- `Tell Me What This Is: Few-Shot Incremental Object Learning by a Robot `__ by Ali Ayub and Alan R. Wagner. *arXiv*, 2020. |ayub2020bContinual_Few_Shot_Learning| +- `La-MAML: Look-Ahead Meta Learning for Continual Learning `__ by Gunshi Gupta, Karmesh Yadav and Liam Paull. *arXiv*, 2020. |gupta2020aContinual_Few_Shot_Learning| +- `iTAML: An Incremental Task-Agnostic Meta-Learning Approach `__ by Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan and Mubarak Shah. *IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 13588--13597, 2020. [cifar] [imagenet] |rajasegaran2020Continual_Few_Shot_Learning| +- `Wandering within a World: Online Contextualized Few-Shot Learning `__ by Mengye Ren, Michael L Iuzzolino, Michael C Mozer and Richard S Zemel. *arXiv*, 2020.
[omniglot] |ren2020Continual_Few_Shot_Learning| +- `Few-Shot Class-Incremental Learning `__ by X. Tao, Hong X., X. Chang, S. Dong, X. Wei and Y. Gong. *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2020. [cifar] |tao2020Continual_Few_Shot_Learning| +- `Few-Shot Class-Incremental Learning via Feature Space Composition `__ by H. Zhao, Y. Fu, X. Li, S. Li, B. Omar and X. Li. *arXiv*, 2020. [cifar] [cubs] |zhao2020Continual_Few_Shot_Learning| + +Continual Meta Learning +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**4 papers** + +In this section we list all the papers related to continual meta-learning. + +- `Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning `__ by Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez and Laurent Charlin. *arXiv*, 2020. [fashion] [framework] [mnist] |caccia2020Continual_Meta_Learning| +- `Continuous Meta-Learning without Tasks `__ by James Harrison, Apoorva Sharma, Chelsea Finn and Marco Pavone. *arXiv*, 2019. [imagenet] [mnist] |harrison2019Continual_Meta_Learning| +- `Task Agnostic Continual Learning via Meta Learning `__ by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh and Razvan Pascanu. *arXiv:1906.05201 [cs, stat]*, 2019. [mnist] |he2019Continual_Meta_Learning| +- `Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks `__ by Ghassen Jerfel, Erin Grant, Tom Griffiths and Katherine A Heller. *Advances in Neural Information Processing Systems*, 9122--9133, 2019. [bayes] [vision] |jerfel2019Continual_Meta_Learning| + +Continual Reinforcement Learning +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**19 papers** + +In this section we list all the papers related to continual reinforcement learning. + +- `Reducing Catastrophic Forgetting When Evolving Neural Networks `__ by Joseph Early. *arXiv*, 2019.
|early2019Continual_Reinforcement_Learning| +- `A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning `__ by Francisco M Garcia and Philip S Thomas. *NeurIPS*, 5691--5700, 2019. |garcia2019Continual_Reinforcement_Learning| +- `Policy Consolidation for Continual Reinforcement Learning `__ by Christos Kaplanis, Murray Shanahan and Claudia Clopath. *ICML*, 2019. |kaplanis2019Continual_Reinforcement_Learning| +- `Continual Learning Exploiting Structure of Fractal Reservoir Computing `__ by Taisuke Kobayashi and Toshiki Sugino. *Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions*, 35--47, 2019. [rnn] |kobayashi2019Continual_Reinforcement_Learning| +- `Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL `__ by Anusha Nagabandi, Chelsea Finn and Sergey Levine. *7th International Conference on Learning Representations, ICLR 2019*, 2019. |nagabandi2019Continual_Reinforcement_Learning| +- `Leaky Tiling Activations: A Simple Approach to Learning Sparse Representations Online `__ by Yangchen Pan, Kirby Banman and Martha White. *arXiv*, 2019. [sparsity] |pan2019Continual_Reinforcement_Learning| +- `Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference `__ by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and Gerald Tesauro. *ICLR*, 2019. [mnist] |riemer2019Continual_Reinforcement_Learning| +- `Experience Replay for Continual Learning `__ by David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P Lillicrap and Greg Wayne. *NeurIPS*, 350--360, 2019. |rolnick2019Continual_Reinforcement_Learning| +- `Selective Experience Replay for Lifelong Learning `__ by David Isele and Akansel Cosgun. *Thirty-Second AAAI Conference on Artificial Intelligence*, 3302--3309, 2018. |isele2018Continual_Reinforcement_Learning| +- `Continual Reinforcement Learning with Complex Synapses `__ by Christos Kaplanis, Murray Shanahan and Claudia Clopath.
*ICML*, 2018. |kaplanis2018Continual_Reinforcement_Learning| +- `Unicorn: Continual Learning with a Universal, Off-Policy Agent `__ by Daniel J Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver and Tom Schaul. *arXiv*, 1--17, 2018. |mankowitz2018Continual_Reinforcement_Learning| +- `Lifelong Inverse Reinforcement Learning `__ by Jorge A Mendez, Shashank Shivkumar and Eric Eaton. *NeurIPS*, 4502--4513, 2018. |mendez2018Continual_Reinforcement_Learning| +- `Progress & Compress: A Scalable Framework for Continual Learning `__ by Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. *International Conference on Machine Learning*, 4528--4537, 2018. [vision] |schwarz2018Continual_Reinforcement_Learning| +- `Overcoming Catastrophic Forgetting in Neural Networks `__ by James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran and Raia Hadsell. *PNAS*, 3521--3526, 2017. [mnist] |kirkpatrick2017Continual_Reinforcement_Learning| +- `Stable Predictive Representations with General Value Functions for Continual Learning `__ by Matthew Schlegel, Adam White and Martha White. *Continual Learning and Deep Networks Workshop at the Neural Information Processing System Conference*, 2017. |schlegel2017Continual_Reinforcement_Learning| +- `Continual Learning through Evolvable Neural Turing Machines `__ by Benno Luders, Mikkel Schlager and Sebastian Risi. *NIPS 2016 Workshop on Continual Learning and Deep Networks*, 2016. |luders2016Continual_Reinforcement_Learning| +- `Progressive Neural Networks `__ by Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu and Raia Hadsell. *arXiv*, 2016.
[mnist] |rusu2016Continual_Reinforcement_Learning| +- `Lifelong-RL: Lifelong Relaxation Labeling for Separating Entities and Aspects in Opinion Targets. `__ by Lei Shu, Bing Liu, Hu Xu and Annice Kim. *Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing*, 225--235, 2016. [nlp] |shu2016Continual_Reinforcement_Learning| +- `CHILD: A First Step Towards Continual Learning `__ by Mark B Ring. *Machine Learning*, 77--104, 1997. |ring1997Continual_Reinforcement_Learning| + +Continual Sequential Learning +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**21 papers** + +Here we maintain a list of all the papers related to continual learning at the intersection with sequential learning. + +- `Continual Learning with Gated Incremental Memories for Sequential Data Processing `__ by Andrea Cossu, Antonio Carta and Davide Bacciu. *Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020)*, 2020. [mnist] [rnn] |cossu2020Continual_Sequential_Learning| +- `Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams `__ by Matthias De Lange and Tinne Tuytelaars. *arXiv*, 2020. [cifar] [framework] [mnist] [vision] |delange2020aContinual_Sequential_Learning| +- `Organizing Recurrent Network Dynamics by Task-Computation to Enable Continual Learning `__ by Lea Duncker, Laura N Driscoll, Krishna V Shenoy, Maneesh Sahani and David Sussillo. *Advances in Neural Information Processing Systems*, 2020. [rnn] |duncker2020Continual_Sequential_Learning| +- `Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis `__ by Tyler L Hayes and Christopher Kanan. *CLVision Workshop at CVPR 2020*, 1--15, 2020. [core50] [imagenet] |hayes2020Continual_Sequential_Learning| +- `Meta-Consolidation for Continual Learning `__ by K J Joseph and Vineeth N Balasubramanian. *NeurIPS*, 2020.
[bayes] [cifar] [imagenet] [mnist] |joseph2020Continual_Sequential_Learning| +- `Continual Learning with Bayesian Neural Networks for Non-Stationary Data `__ by Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt and Stephan Günnemann. *Eighth International Conference on Learning Representations*, 2020. [bayes] |kurle2020Continual_Sequential_Learning| +- `Compositional Language Continual Learning `__ by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny. *Eighth International Conference on Learning Representations*, 2020. [nlp] [rnn] |li2020bContinual_Sequential_Learning| +- `Online Continual Learning on Sequences `__ by German I Parisi and Vincenzo Lomonaco. *arXiv*, 2020. [framework] |parisi2020Continual_Sequential_Learning| +- `Gradient Based Sample Selection for Online Continual Learning `__ by Rahaf Aljundi, Min Lin, Baptiste Goujaud and Yoshua Bengio. *Advances in Neural Information Processing Systems 32*, 11816--11825, 2019. [cifar] [mnist] |aljundi2019aContinual_Sequential_Learning| +- `Online Continual Learning with Maximal Interfered Retrieval `__ by Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin and Lucas Page-Caccia. *Advances in Neural Information Processing Systems 32*, 11849--11860, 2019. [cifar] [mnist] |aljundi2019bContinual_Sequential_Learning| +- `Task-Free Continual Learning `__ by Rahaf Aljundi, Klaas Kelchtermans and Tinne Tuytelaars. *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2019. [vision] |aljundi2019dContinual_Sequential_Learning| +- `Efficient Lifelong Learning with A-GEM `__ by Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach and Mohamed Elhoseiny. *ICLR*, 2019. [cifar] [mnist] |chaudhry2019Continual_Sequential_Learning| +- `Task Agnostic Continual Learning via Meta Learning `__ by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh and Razvan Pascanu. *arXiv:1906.05201 [cs, stat]*, 2019. 
[mnist] |he2019Continual_Sequential_Learning| +- `A Study on Catastrophic Forgetting in Deep LSTM Networks `__ by Monika Schak and Alexander Gepperth. *Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning*, 714--728, 2019. [rnn] |schak2019Continual_Sequential_Learning| +- `Unsupervised Progressive Learning and the STAM Architecture `__ by James Smith, Seth Baer, Cameron Taylor and Constantine Dovrolis. *arXiv*, 2019. [mnist] |smith2019Continual_Sequential_Learning| +- `Toward Training Recurrent Neural Networks for Lifelong Learning `__ by Shagun Sodhani, Sarath Chandar and Yoshua Bengio. *Neural Computation*, 1--35, 2019. [rnn] |sodhani2019Continual_Sequential_Learning| +- `Overcoming Catastrophic Interference Using Conceptor-Aided Backpropagation `__ by Xu He and Herbert Jaeger. *ICLR*, 2018. [mnist] |he2018Continual_Sequential_Learning| +- `Gradient Episodic Memory for Continual Learning `__ by David Lopez-Paz and Marc'Aurelio Ranzato. *NIPS*, 2017. [cifar] [mnist] |lopezpaz2017Continual_Sequential_Learning| +- `iCaRL: Incremental Classifier and Representation Learning `__ by Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl and Christoph H Lampert. *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2017. [cifar] |rebuffi2017Continual_Sequential_Learning| +- `Self-Refreshing Memory in Artificial Neural Networks: Learning Temporal Sequences without Catastrophic Forgetting `__ by Bernard Ans, Stéphane Rousset, Robert M. French and Serban Musca. *Connection Science*, 71--99, 2004. [rnn] |ans2004Continual_Sequential_Learning| +- `Using Pseudo-Recurrent Connectionist Networks to Solve the Problem of Sequential Learning `__ by Robert French. *Proceedings of the 19th Annual Cognitive Science Society Conference*, 1997. 
[dual] |french1997aContinual_Sequential_Learning| + +Dissertations and Theses +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**6 papers** + +In this section we maintain a list of all the dissertations and theses produced on continual learning and related topics. + +- `Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes `__ by Timothée Lesort. *arXiv*, 2020. [cifar] [framework] [generative] [mnist] [vision] |lesort2020aDissertation_and_Theses| +- `Continual Learning in Neural Networks `__ by Rahaf Aljundi. *arXiv*, 2019. [cifar] [imagenet] [mnist] [vision] |aljundi2019Dissertation_and_Theses| +- `Continual Deep Learning via Progressive Learning `__ by Haytham M. Fayek. *RMIT University*, 2019. [audio] [cifar] [imagenet] [sparsity] |fayek2019Dissertation_and_Theses| +- `Continual Learning with Deep Architectures `__ by Vincenzo Lomonaco. *University of Bologna*, 2019. [core50] [framework] |lomonaco2019Dissertation_and_Theses| +- `Explanation-Based Neural Network Learning: A Lifelong Learning Approach `__ by Sebastian Thrun. *Springer*, 1996. [framework] |thrun1996Dissertation_and_Theses| +- `Continual Learning in Reinforcement Environments `__ by Mark Ring. *University of Texas*, 1994. [framework] |ring1994Dissertation_and_Theses| + +Generative Replay Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**5 papers** + +In this section we collect all the papers introducing a continual learning strategy employing generative replay methods. + +- `Brain-Inspired Replay for Continual Learning with Artificial Neural Networks `__ by Gido M. van de Ven, Hava T. Siegelmann and Andreas S. Tolias. *Nature Communications*, 2020. [cifar] [framework] [generative] [mnist] |vandeven2020Generative_Replay_Methods| +- `Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay `__ by Mohammad Rostami, Soheil Kolouri and Praveen K Pilly. *arXiv*, 2019. 
|rostami2019Generative_Replay_Methods| +- `Continual Learning of New Sound Classes Using Generative Replay `__ by Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis and Laurent Charlin. *arXiv*, 2019. [audio] |wang2019Generative_Replay_Methods| +- `Generative Replay with Feedback Connections as a General Strategy for Continual Learning `__ by Gido M. van de Ven and Andreas S. Tolias. *arXiv*, 2018. [framework] [generative] [mnist] |vandeven2018Generative_Replay_Methods| +- `Continual Learning with Deep Generative Replay `__ by Hanul Shin, Jung Kwon Lee, Jaehong Kim and Jiwon Kim. *Advances in Neural Information Processing Systems 30*, 2990--2999, 2017. [mnist] |shin2017Generative_Replay_Methods| + +Hybrid Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**8 papers** + +In this section we collect all the papers introducing a continual learning strategy that employs hybrid methods, mixing different strategies. + +- `Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches `__ by Vincenzo Lomonaco, Davide Maltoni and Lorenzo Pellegrini. *CVPR Workshop on Continual Learning for Computer Vision*, 246--247, 2020. [core50] |lomonaco2020aHybrid_Methods| +- `Linear Mode Connectivity in Multitask and Continual Learning `__ by Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Gorur, Razvan Pascanu and Hassan Ghasemzadeh. *arXiv*, 2020. [cifar] [experimental] [mnist] |mirzadeh2020Hybrid_Methods| +- `Single-Net Continual Learning with Progressive Segmented Training (PST) `__ by Xiaocong Du, Gouranga Charan, Frank Liu and Yu Cao. *arXiv*, 1629--1636, 2019. [cifar] |du2019Hybrid_Methods| +- `Continuous Learning in Single-Incremental-Task Scenarios `__ by Davide Maltoni and Vincenzo Lomonaco. *Neural Networks*, 56--73, 2019. [core50] [framework] |maltoni2019Hybrid_Methods| +- `Toward Training Recurrent Neural Networks for Lifelong Learning `__ by Shagun Sodhani, Sarath Chandar and Yoshua Bengio. *Neural Computation*, 1--35, 2019. 
[rnn] |sodhani2019Hybrid_Methods| +- `Continual Learning of New Sound Classes Using Generative Replay `__ by Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis and Laurent Charlin. *arXiv*, 2019. [audio] |wang2019Hybrid_Methods| +- `Lifelong Learning via Progressive Distillation and Retrospection `__ by Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang and Dahua Lin. *ECCV*, 2018. [imagenet] [vision] |hou2018Hybrid_Methods| +- `Progress & Compress: A Scalable Framework for Continual Learning `__ by Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. *International Conference on Machine Learning*, 4528--4537, 2018. [vision] |schwarz2018Hybrid_Methods| + +Meta Continual Learning +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**8 papers** + +In this section we list all the papers related to meta-continual learning. + +- `Learning to Continually Learn `__ by Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune and Nick Cheney. *ECAI*, 2020. [vision] |beaulieu2020Meta_Continual_Learning| +- `Continual Learning with Deep Artificial Neurons `__ by Blake Camp, Jaya Krishna Mandivarapu and Rolando Estrada. *arXiv*, 2020. [experimental] |camp2020Meta_Continual_Learning| +- `Meta-Consolidation for Continual Learning `__ by K J Joseph and Vineeth N Balasubramanian. *NeurIPS*, 2020. [bayes] [cifar] [imagenet] [mnist] |joseph2020Meta_Continual_Learning| +- `Meta Continual Learning via Dynamic Programming `__ by R Krishnan and Prasanna Balaprakash. *arXiv*, 2020. [omniglot] |krishnan2020Meta_Continual_Learning| +- `Online Meta-Learning `__ by Chelsea Finn, Aravind Rajeswaran, Sham Kakade and Sergey Levine. *ICML*, 2019. [experimental] [mnist] |finn2019Meta_Continual_Learning| +- `Meta-Learning Representations for Continual Learning `__ by Khurram Javed and Martha White. *NeurIPS*, 2019. 
[omniglot] |javed2019Meta_Continual_Learning| +- `Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference `__ by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and Gerald Tesauro. *ICLR*, 2019. [mnist] |riemer2019Meta_Continual_Learning| +- `Meta Continual Learning `__ by Risto Vuorio, Dong-Yeon Cho, Daejoong Kim and Jiwon Kim. *arXiv*, 2018. [mnist] |vuorio2018Meta_Continual_Learning| + +Metrics and Evaluations +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**6 papers** + +In this section we list all the papers related to continual learning evaluation protocols and metrics. + +- `Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning `__ by Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez and Laurent Charlin. *arXiv*, 2020. [fashion] [framework] [mnist] |caccia2020Metrics_and_Evaluations| +- `Optimal Continual Learning Has Perfect Memory and Is NP-Hard `__ by Jeremias Knoblauch, Hisham Husain and Tom Diethe. *ICML*, 2020. [theoretical] |knoblauch2020Metrics_and_Evaluations| +- `Regularization Shortcomings for Continual Learning `__ by Timothée Lesort, Andrei Stoian and David Filliat. *arXiv*, 2020. [fashion] [mnist] |lesort2020bMetrics_and_Evaluations| +- `Strategies for Improving Single-Head Continual Learning Performance `__ by Alaa El Khatib and Fakhri Karray. *Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*, 452--460, 2019. [cifar] [mnist] |elkhatib2019Metrics_and_Evaluations| +- `Towards Robust Evaluations of Continual Learning `__ by Sebastian Farquhar and Yarin Gal. *Privacy in Machine Learning and Artificial Intelligence Workshop, ICML*, 2019. 
[fashion] [framework] |farquhar2019Metrics_and_Evaluations| +- `Three Scenarios for Continual Learning `__ by Gido M van de Ven and Andreas S Tolias. *Continual Learning Workshop NeurIPS*, 2018. [framework] [mnist] |vandeven2018aMetrics_and_Evaluations| + +Neuroscience +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**5 papers** + +In this section we maintain a list of all Neuroscience papers that can be related (and useful) to continual machine learning. + +- `Can Sleep Protect Memories from Catastrophic Forgetting? `__ by Oscar C Gonzalez, Yury Sokolov, Giri Krishnan and Maxim Bazhenov. *bioRxiv*, 569038, 2019. |gonzalez2019Neuroscience| +- `Synaptic Consolidation: An Approach to Long-Term Learning `__ by Claudia Clopath. *Cognitive Neurodynamics*, 251--257, 2012. [hebbian] |clopath2012Neuroscience| +- `The Organization of Behavior: A Neuropsychological Theory `__ by D O Hebb. *Lawrence Erlbaum*, 2002. [hebbian] |hebb2002Neuroscience| +- `Negative Transfer Errors in Sequential Cognitive Skills: Strong-but-Wrong Sequence Application. `__ by Dan J. Woltz, Michael K. Gardner and Brian G. Bell. *Journal of Experimental Psychology: Learning, Memory, and Cognition*, 601--625, 2000. |woltz2000Neuroscience| +- `Connectionist Models of Recognition Memory: Constraints Imposed by Learning and Forgetting Functions. `__ by R Ratcliff. *Psychological Review*, 285--308, 1990. |ratcliff1990Neuroscience| + +Others +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**30 papers** + +In this section we list all the other papers not appearing in any of the above sections. + +- `Continuum: Simple Management of Complex Continual Learning Scenarios `__ by Arthur Douillard and Timothée Lesort. *arXiv*, 2021. |douillard2021Others| +- `Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning `__ by Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang and Zhangyang Wang. *International Conference on Learning Representations*, 2020. 
|chen2020Others| +- `Continual Learning Using Task Conditional Neural Networks `__ by Honglin Li, Payam Barnaghi, Shirin Enshaeifar and Frieder Ganz. *arXiv*, 2020. [cifar] [mnist] |li2020Others| +- `Energy-Based Models for Continual Learning `__ by Shuang Li, Yilun Du, Gido M. van de Ven, Antonio Torralba and Igor Mordatch. *arXiv*, 2020. [cifar] [experimental] [mnist] |li2020aOthers| +- `Continual Universal Object Detection `__ by Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika and Stefano Soatto. *arXiv*, 2020. |liu2020Others| +- `Mnemonics Training: Multi-Class Incremental Learning without Forgetting `__ by Yaoyao Liu, An-An Liu, Yuting Su, Bernt Schiele and Qianru Sun. *arXiv*, 2020. [cifar] [imagenet] |liu2020aOthers| +- `Structured Compression and Sharing of Representational Space for Continual Learning `__ by Gobinda Saha, Isha Garg, Aayush Ankit and Kaushik Roy. *arXiv*, 2020. [cifar] [mnist] |saha2020Others| +- `Gradient Projection Memory for Continual Learning `__ by Gobinda Saha and Kaushik Roy. *International Conference on Learning Representations*, 2020. |saha2020aOthers| +- `Lifelong Graph Learning `__ by Chen Wang, Yuheng Qiu and Sebastian Scherer. *arXiv*, 2020. [graph] |wang2020Others| +- `Superposition of Many Models into One `__ by Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal and Bruno Olshausen. *arXiv*, 2019. [cifar] [mnist] |cheung2019Others| +- `Continual Learning in Practice `__ by Tom Diethe, Tom Borchert, Eno Thereska, Borja Balle and Neil Lawrence. *arXiv*, 2019. |diethe2019Others| +- `Dynamically Constraining Connectionist Networks to Produce Distributed, Orthogonal Representations to Reduce Catastrophic Interference `__ by Robert French. *Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society*, 335--340, 1994. |french2019Others| +- `Continual Learning via Neural Pruning `__ by Siavash Golkar, Michael Kagan and Kyunghyun Cho. *arXiv*, 2019. 
[cifar] [mnist] [sparsity] |golkar2019Others| +- `BooVAE: A Scalable Framework for Continual VAE Learning under Boosting Approach `__ by Anna Kuzina, Evgenii Egorov and Evgeny Burnaev. *arXiv*, 2019. [bayes] [fashion] [mnist] |kuzina2019Others| +- `Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild `__ by Kibok Lee, Kimin Lee, Jinwoo Shin and Honglak Lee. *Proceedings of the IEEE International Conference on Computer Vision*, 312--321, 2019. |lee2019Others| +- `Continual Learning Using Bayesian Neural Networks `__ by HongLin Li, Payam Barnaghi, Shirin Enshaeifar and Frieder Ganz. *arXiv*, 2019. [bayes] [mnist] |li2019Others| +- `Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition `__ by Martin Mundt, Sagnik Majumder, Iuliia Pliushch, Yong Won Hong and Visvanathan Ramesh. *arXiv*, 2019. [audio] [bayes] [fashion] [framework] [generative] [mnist] [vision] |mundt2019Others| +- `Continual Rare-Class Recognition with Emerging Novel Subclasses `__ by Hung Nguyen, Xuejian Wang and Leman Akoglu. *ECML*, 2019. [nlp] |nguyen2019Others| +- `Random Path Selection for Incremental Learning `__ by Jathushan Rajasegaran, Munawar Hayat, Salman Khan, Fahad Shahbaz Khan and Ling Shao. *NeurIPS*, 12669--12679, 2019. [cifar] [imagenet] [mnist] |rajasegaran2019Others| +- `Improving and Understanding Variational Continual Learning `__ by Siddharth Swaroop, Cuong V Nguyen, Thang D Bui and Richard E Turner. *Continual Learning Workshop NeurIPS*, 1--17, 2019. [bayes] [mnist] |swaroop2019Others| +- `Continual Learning via Online Leverage Score Sampling `__ by Dan Teng and Sakyasingha Dasgupta. *arXiv*, 2019. [cifar] [mnist] |teng2019Others| +- `Class-Incremental Learning Based on Feature Extraction of CNN With Optimized Softmax and One-Class Classifiers `__ by Xin Ye and Qiuyu Zhu. *IEEE Access*, 42024--42031, 2019. 
[cifar] [mnist] |ye2019Others| +- `Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies `__ by Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner and Irina Higgins. *Neural Information Processing Systems (NeurIPS)*, 2018. |achille2018Others| +- `A Unifying Bayesian View of Continual Learning `__ by Sebastian Farquhar and Yarin Gal. *NeurIPS Bayesian Deep Learning Workshop*, 2018. [bayes] [cifar] [mnist] |farquhar2018Others| +- `Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights `__ by Arun Mallya, Dillon Davis and Svetlana Lazebnik. *ECCV*, 72--88, 2018. [imagenet] |mallya2018Others| +- `Adding New Tasks to a Single Network with Weight Transformations Using Binary Masks `__ by Massimiliano Mancini, Elisa Ricci, Barbara Caputo and Samuel Rota Bulò. *Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*, 180--189, 2018. [sparsity] [vision] |mancini2018Others| +- `Variational Continual Learning `__ by Cuong V Nguyen, Yingzhen Li, Thang D Bui and Richard E Turner. *ICLR*, 2018. [bayes] |nguyen2018Others| +- `Task Agnostic Continual Learning Using Online Variational Bayes `__ by Chen Zeno, Itay Golan, Elad Hoffer and Daniel Soudry. *NeurIPS Bayesian Deep Learning Workshop*, 2018. [bayes] [cifar] [mnist] |zeno2018Others| +- `Encoder Based Lifelong Learning `__ by Amal Rannen Triki, Rahaf Aljundi, Mathew B. Blaschko and Tinne Tuytelaars. *Proceedings of the IEEE International Conference on Computer Vision*, 1329--1337, 2017. [imagenet] [vision] |triki2017Others| +- `Fine-Tuning Deep Neural Networks in Continuous Learning Scenarios `__ by Christoph Käding, Erik Rodner, Alexander Freytag and Joachim Denzler. *ACCV Workshop*, 2016. 
[imagenet] |kading2016Others| + +Regularization Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**24 papers** + +In this section we collect all the papers introducing a continual learning strategy employing regularization methods. + +- `Modeling the Background for Incremental Learning in Semantic Segmentation `__ by Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bulò, Elisa Ricci and Barbara Caputo. *CVPR*, 9233--9242, 2020. |cermelli2020Regularization_Methods| +- `PLOP: Learning without Forgetting for Continual Semantic Segmentation `__ by Arthur Douillard, Yifu Chen, Arnaud Dapogny and Matthieu Cord. *arXiv*, 2020. |douillard2020Regularization_Methods| +- `Insights from the Future for Continual Learning `__ by Arthur Douillard, Eduardo Valle, Charles Ollion, Thomas Robert and Matthieu Cord. *arXiv*, 2020. |douillard2020aRegularization_Methods| +- `PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning `__ by Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert and Eduardo Valle. *European Conference on Computer Vision (ECCV)*, 2020. |douillard2020bRegularization_Methods| +- `Uncertainty-Guided Continual Learning with Bayesian Neural Networks `__ by Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell and Marcus Rohrbach. *ICLR*, 2020. [bayes] [cifar] [fashion] [mnist] |ebrahimi2020Regularization_Methods| +- `Continual Learning of Object Instances `__ by Kishan Parshotam and Mert Kilickaya. *CVPR 2020: Workshop on Continual Learning in Computer Vision*, 2020. [vision] |parshotam2020Regularization_Methods| +- `Efficient Continual Learning in Neural Networks with Embedding Regularization `__ by Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco and Aurelio Uncini. *Neurocomputing*, 2020. [cifar] [mnist] |pomponi2020Regularization_Methods| +- `Continual Learning with Hypernetworks `__ by Johannes von Oswald, Christian Henning, João Sacramento and Benjamin F Grewe. *International Conference on Learning Representations*, 2020. 
[cifar] [mnist] |vonoswald2020Regularization_Methods| +- `Uncertainty-Based Continual Learning with Adaptive Regularization `__ by Hongjoon Ahn, Sungmin Cha, Donggyu Lee and Taesup Moon. *NeurIPS*, 4392--4402, 2019. [bayes] [cifar] [mnist] |ahn2019Regularization_Methods| +- `Learning without Memorizing `__ by Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu and Rama Chellappa. *CVPR*, 2019. [cifar] |dhar2019Regularization_Methods| +- `Incremental Learning Techniques for Semantic Segmentation `__ by Umberto Michieli and Pietro Zanuttigh. *Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019*, 3205--3212, 2019. |michieli2019Regularization_Methods| +- `Functional Regularisation for Continual Learning Using Gaussian Processes `__ by Michalis K Titsias, Jonathan Schwarz, Alexander G de G Matthews, Razvan Pascanu and Yee Whye Teh. *arXiv*, 2019. [mnist] [omniglot] |titsias2019Regularization_Methods| +- `Memory Aware Synapses: Learning What (Not) to Forget `__ by Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach and Tinne Tuytelaars. *The European Conference on Computer Vision (ECCV)*, 2018. [vision] |aljundi2018Regularization_Methods| +- `Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence `__ by Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan and Philip H. S. Torr. *Proceedings of the European Conference on Computer Vision (ECCV)*, 532--547, 2018. |chaudhry2018Regularization_Methods| +- `Rotate Your Networks: Better Weight Consolidation and Less Catastrophic Forgetting `__ by Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M. Lopez and Andrew D Bagdanov. *2018 24th International Conference on Pattern Recognition (ICPR)*, 2262--2268, 2018. [cifar] [mnist] |liu2018Regularization_Methods| +- `Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting `__ by Hippolyt Ritter, Aleksandar Botev and David Barber. *arXiv*, 2018. 
[bayes] [mnist] |ritter2018Regularization_Methods| +- `Overcoming Catastrophic Forgetting with Hard Attention to the Task `__ by Joan Serrà, Dídac Surís, Marius Miron and Alexandros Karatzoglou. *ICML*, 2018. [cifar] [fashion] [mnist] |serra2018Regularization_Methods| +- `Overcoming Catastrophic Forgetting in Neural Networks `__ by James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran and Raia Hadsell. *PNAS*, 3521--3526, 2017. [mnist] |kirkpatrick2017Regularization_Methods| +- `Overcoming Catastrophic Forgetting by Incremental Moment Matching `__ by Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha and Byoung-Tak Zhang. *Advances in Neural Information Processing Systems*, 4653--4663, 2017. [bayes] [cifar] [mnist] |lee2017Regularization_Methods| +- `Lifelong Generative Modeling `__ by Jason Ramapuram, Magda Gregorova and Alexandros Kalousis. *arXiv*, 1--14, 2017. [fashion] [generative] [mnist] |ramapuram2017Regularization_Methods| +- `Continual Learning in Generative Adversarial Nets `__ by Ari Seff, Alex Beatson, Daniel Suo and Han Liu. *arXiv*, 1--9, 2017. [mnist] |seff2017Regularization_Methods| +- `Incremental Learning of Object Detectors without Catastrophic Forgetting `__ by Konstantin Shmelkov, Cordelia Schmid and Karteek Alahari. *Proceedings of the IEEE International Conference on Computer Vision*, 3420--3429, 2017. |shmelkov2017Regularization_Methods| +- `Continual Learning Through Synaptic Intelligence `__ by Friedemann Zenke, Ben Poole and Surya Ganguli. *International Conference on Machine Learning*, 3987--3995, 2017. [cifar] [mnist] |zenke2017Regularization_Methods| +- `Learning without Forgetting `__ by Zhizhong Li and Derek Hoiem. *European Conference on Computer Vision*, 614--629, 2016. 
[imagenet] |li2016Regularization_Methods| + +Rehearsal Methods +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**14 papers** + +In this section we collect all the papers introducing a continual learning strategy employing rehearsal methods. + +- `CALM: Continuous Adaptive Learning for Language Modeling `__ by Kristjan Arumae and Parminder Bhatia. *arXiv*, 2020. [nlp] |arumae2020Rehearsal_Methods| +- `CLOPS: Continual Learning of Physiological Signals `__ by Dani Kiyasseh, Tingting Zhu and David A Clifton. *arXiv*, 2020. |kiyasseh2020Rehearsal_Methods| +- `Continual Learning with Bayesian Neural Networks for Non-Stationary Data `__ by Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt and Stephan Günnemann. *Eighth International Conference on Learning Representations*, 2020. [bayes] |kurle2020Rehearsal_Methods| +- `Graph-Based Continual Learning `__ by Binh Tang and David S. Matteson. *International Conference on Learning Representations*, 2020. |tang2020Rehearsal_Methods| +- `Brain-Inspired Replay for Continual Learning with Artificial Neural Networks `__ by Gido M. van de Ven, Hava T. Siegelmann and Andreas S. Tolias. *Nature Communications*, 2020. [cifar] [framework] [generative] [mnist] |vandeven2020Rehearsal_Methods| +- `Continual Learning with Hypernetworks `__ by Johannes von Oswald, Christian Henning, João Sacramento and Benjamin F Grewe. *International Conference on Learning Representations*, 2020. [cifar] [mnist] |vonoswald2020Rehearsal_Methods| +- `Online Continual Learning with Maximal Interfered Retrieval `__ by Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin and Lucas Page-Caccia. *Advances in Neural Information Processing Systems 32*, 11849--11860, 2019. 
[cifar] [mnist] |aljundi2019bRehearsal_Methods| +- `On Tiny Episodic Memories in Continual Learning `__ by Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip H S Torr and Marc'Aurelio Ranzato. *arXiv*, 2019. [cifar] [imagenet] [mnist] |chaudhry2019aRehearsal_Methods| +- `Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients `__ by Yu Chen, Tom Diethe and Neil Lawrence. *arXiv*, 2019. [bayes] |chen2019Rehearsal_Methods| +- `Experience Replay for Continual Learning `__ by David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P Lillicrap and Greg Wayne. *NeurIPS*, 350--360, 2019. |rolnick2019Rehearsal_Methods| +- `Prototype Reminding for Continual Learning `__ by Mengmi Zhang, Tao Wang, Joo Hwee Lim and Jiashi Feng. *arXiv*, 1--10, 2019. [bayes] [cifar] [imagenet] [mnist] |zhang2019Rehearsal_Methods| +- `Memory Efficient Experience Replay for Streaming Learning `__ by Tyler L Hayes, Nathan D Cahill and Christopher Kanan. *IEEE International Conference on Robotics and Automation (ICRA)*, 2018. [core50] |hayes2018Rehearsal_Methods| +- `Selective Experience Replay for Lifelong Learning `__ by David Isele and Akansel Cosgun. *Thirty-Second AAAI Conference on Artificial Intelligence*, 3302--3309, 2018. |isele2018Rehearsal_Methods| +- `Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks `__ by Bernard Ans, Stéphane Rousset, Robert M. French and Serban C. Musca. *Proceedings of the 24th Annual Conference of the Cognitive Science Society*, 2002. [rnn] |ans2002Rehearsal_Methods| + +Review Papers and Books +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +**18 papers** + +In this section we collect all the main review papers and books on continual learning and related subjects. These may constitute a solid starting point for continual learning newcomers. 
+ +- `Continual Lifelong Learning in Natural Language Processing: A Survey `__ by Magdalena Biesialska, Katarzyna Biesialska and Marta R. Costa-jussà. *Proceedings of the 28th International Conference on Computational Linguistics*, 6523--6541, 2020. [nlp] |biesialska2020Review_Papers_and_Books| +- `Embracing Change: Continual Learning in Deep Neural Networks `__ by Raia Hadsell, Dushyant Rao, Andrei A Rusu and Razvan Pascanu. *Trends in Cognitive Sciences*, 2020. |hadsell2020Review_Papers_and_Books| +- `Continual Learning for Robotics: Definition, Framework, Learning Strategies, Opportunities and Challenges `__ by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez. *Information Fusion*, 52--68, 2020. [framework] |lesort2020Review_Papers_and_Books| +- `A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning `__ by Martin Mundt, Yong Won Hong, Iuliia Pliushch and Visvanathan Ramesh. *arXiv*, 32, 2020. [bayes] [framework] |mundt2020Review_Papers_and_Books| +- `A Review of Off-Line Mode Dataset Shifts `__ by Carla C. Takahashi and Antonio P. Braga. *IEEE Computational Intelligence Magazine*, 16--27, 2020. |takahashi2020Review_Papers_and_Books| +- `Continual Learning with Neural Networks: A Review `__ by Abhijeet Awasthi and Sunita Sarawagi. *Proceedings of the ACM India Joint International Conference on Data Science and Management of Data*, 362--365, 2019. |awasthi2019Review_Papers_and_Books| +- `A Continual Learning Survey: Defying Forgetting in Classification Tasks `__ by Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars. *arXiv*, 2019. [framework] |delange2019Review_Papers_and_Books| +- `Continual Lifelong Learning with Neural Networks: A Review `__ by German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan and Stefan Wermter. 
*Neural Networks*, 54--71, 2019. [framework] |parisi2019Review_Papers_and_Books| +- `Lifelong Machine Learning, Second Edition `__ by Zhiyuan Chen and Bing Liu. *Synthesis Lectures on Artificial Intelligence and Machine Learning*, 2018. |chen2018Review_Papers_and_Books| +- `Measuring Catastrophic Forgetting in Neural Networks `__ by Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L Hayes and Christopher Kanan. *Thirty-Second AAAI Conference on Artificial Intelligence*, 2018. [mnist] |kemker2018aReview_Papers_and_Books| +- `Generative Models from the Perspective of Continual Learning `__ by Timothée Lesort, Hugo Caselles-Dupré, Michael Garcia-Ortiz, Andrei Stoian and David Filliat. *Proceedings of the International Joint Conference on Neural Networks*, 2018. [cifar] [generative] [mnist] |lesort2018Review_Papers_and_Books| +- `Incremental On-Line Learning: A Review and Comparison of State of the Art Algorithms `__ by Viktor Losing, Barbara Hammer and Heiko Wersing. *Neurocomputing*, 1261--1274, 2018. |losing2018Review_Papers_and_Books| +- `A Comprehensive, Application-Oriented Study of Catastrophic Forgetting in DNNs `__ by B Pfülb and A Gepperth. *ICLR*, 2018. [fashion] [mnist] |pfulb2018Review_Papers_and_Books| +- `Avoiding Catastrophic Forgetting `__ by Michael E. Hasselmo. *Trends in Cognitive Sciences*, 407--408, 2017. |hasselmo2017Review_Papers_and_Books| +- `Lifelong Machine Learning: A Paradigm for Continuous Learning `__ by Bing Liu. *Frontiers of Computer Science*, 359--361, 2017. |liu2017Review_Papers_and_Books| +- `Learning in Nonstationary Environments: A Survey `__ by Gregory Ditzler, Manuel Roveri, Cesare Alippi and Robi Polikar. *IEEE Computational Intelligence Magazine*, 12--25, 2015. 
|ditzler2015Review_Papers_and_Books|
+- `Never-Ending Learning `__ by Tom Mitchell, William W Cohen, E Hruschka, Partha P Talukdar, B Yang, Justin Betteridge, Andrew Carlson, B Dalvi, Matt Gardner, Bryan Kisiel, J Krishnamurthy, Ni Lao, K Mazaitis, T Mohamed, N Nakashole, E Platanios, A Ritter, M Samadi, B Settles, R Wang, D Wijaya, A Gupta, X Chen, A Saparov, M Greaves and J Welling. *Communications of the ACM*, 2302--2310, 2015. |mitchell2015Review_Papers_and_Books|
+- `Catastrophic Forgetting, Rehearsal and Pseudorehearsal `__ by Anthony Robins. *Connection Science*, 123--146, 1995. [dual] |robins1995Review_Papers_and_Books|
+
+Robotics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+**5 papers**
+
+In this section we maintain a list of all Robotics papers that can be related to continual learning.
+
+- `Tell Me What This Is: Few-Shot Incremental Object Learning by a Robot `__ by Ali Ayub and Alan R. Wagner. *arXiv*, 2020. |ayub2020bRobotics|
+- `Online Object and Task Learning via Human Robot Interaction `__ by M. Dehghan, Z. Zhang, M. Siam, J. Jin, L. Petrich and M. Jagersand. *2019 International Conference on Robotics and Automation (ICRA)*, 2019. |dehghan2019Robotics|
+- `Towards Lifelong Self-Supervision: A Deep Learning Direction for Robotics `__ by Jay M Wong. *arXiv*, 2016. |wong2016Robotics|
+- `A Lifelong Learning Perspective for Mobile Robot Control `__ by Sebastian Thrun. *Intelligent Robots and Systems*, 201--214, 1995. |thrun1995Robotics|
+- `Explanation-Based Neural Network Learning for Robot Control `__ by Tom M Mitchell and Sebastian B Thrun. *Advances in Neural Information Processing Systems 5*, 1993. |mitchell1993Robotics|