Lifelong Machine Learning
The intersection of ML models, learning mechanisms, and scientific applications
Many scientific applications generate large quantities of data, and machine learning models trained on such data demand considerable resources. By their nature, however, these applications produce a steady stream of data with novel features: each new experimental condition introduces a shift in the data distribution. Retraining a model for every shift is expensive, which makes lifelong learning an attractive alternative. Studying it rigorously touches every facet of ML and raises questions such as the following (a concrete sketch of the setting follows the questions).
What are the necessary and sufficient conditions for a sequence of tasks to be continually learned~(learnability)?
What amount of data is sufficient for lifelong learning~(data sufficiency)?
What level of accuracy can be guaranteed?
How robust are these algorithms?
Can we guarantee convergence when learning a single task and, more importantly, when learning a sequence of tasks?
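To make the setting concrete, the following minimal sketch (assuming Python with PyTorch; the synthetic shifted tasks, the network, and the replay buffer are illustrative choices, not the proposed methodology) trains a single model on a stream of two tasks whose input distributions differ, using a small experience-replay buffer, one simple existing strategy for mitigating forgetting under distribution shift.

import torch
import torch.nn as nn

def make_task(shift, n=512):
    # Each "experimental condition" shifts the input distribution.
    x = torch.randn(n, 2) + shift
    y = (x.sum(dim=1) > 2 * shift).long()
    return x, y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
replay_x, replay_y = [], []  # small buffer of examples from past tasks

for shift in (0.0, 3.0):  # a stream of tasks, one per experimental condition
    x, y = make_task(shift)
    for _ in range(200):
        bx, by = x, y
        if replay_x:  # mix replayed data into the batch to limit forgetting
            bx = torch.cat([bx] + replay_x)
            by = torch.cat([by] + replay_y)
        opt.zero_grad()
        loss_fn(model(bx), by).backward()
        opt.step()
    replay_x.append(x[:64])  # retain a small sample for future tasks
    replay_y.append(y[:64])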
One of the most attractive components of lifelong learning is the scenario in which successive tasks provide different kinds of feedback. This is the most general form of learning: a fully autonomous framework that can accept whatever feedback the system under consideration provides and understand the system's behavior in a holistic way. It is also the most ambitious goal of my research, since the type of data under consideration and the type of feedback the system provides would be immaterial to the learning framework. For example:
``How does the optimization problem change when two successive tasks have different constructions of costs?''
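The hedged sketch below illustrates one reading of this question; the shared backbone, the two heads, the synthetic data, and the per-task costs are all hypothetical choices for illustration, not a proposed method. The same learner is optimized sequentially under a classification cost (label feedback) and then a regression cost (scalar feedback).

import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(4, 16), nn.ReLU())  # shared across tasks
cls_head = nn.Linear(16, 3)  # used when the task supplies class labels
reg_head = nn.Linear(16, 1)  # used when the task supplies scalar scores

tasks = [
    # (inputs, targets, head, cost): the construction of the cost changes
    (torch.randn(256, 4), torch.randint(0, 3, (256,)), cls_head, nn.CrossEntropyLoss()),
    (torch.randn(256, 4), torch.randn(256, 1), reg_head, nn.MSELoss()),
]

for x, y, head, cost in tasks:
    params = list(backbone.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        cost(head(backbone(x)), y).backward()  # same learner, different cost
        opt.step()

Only the pair of head and cost changes between tasks; a generic lifelong learner would have to accommodate such changes in the construction of the cost without being told about them in advance.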
My aim is to eventually develop a comprehensive methodology to perform lifelong learning in a generic setting.