Meta-Learning with Versatile Loss Geometries for Fast Adaptation Using Mirror Descent

DOI:
10.60864/a9y4-qy28
Citation Author(s):
Georgios B. Giannakis
Submitted by:
Yilang Zhang
Last updated:
6 April 2024 - 9:07pm
Document Type:
Poster
Document Year:
2024
Presenters:
Georgios B. Giannakis
Paper Code:
MLSP-P3.5

Utilizing task-invariant prior knowledge extracted from related tasks, meta-learning is a principled framework that enables learning a new task, especially when data records are limited. A fundamental challenge in meta-learning is how to quickly "adapt" the extracted prior so that a task-specific model can be trained within a few optimization steps. Existing approaches address this challenge with a preconditioner that speeds up convergence of the per-task training process. Though effective at locally representing a quadratic training loss, such simple linear preconditioners can hardly capture complex loss geometries. The present contribution addresses this limitation by learning a nonlinear mirror map, which induces a versatile distance metric that can capture and optimize a wide range of loss geometries, thereby facilitating per-task training. Numerical tests on few-shot learning datasets demonstrate the superior expressiveness and convergence of the advocated approach.
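To make the adaptation step concrete: mirror descent replaces the Euclidean gradient update with theta <- (grad psi)^{-1}(grad psi(theta) - lr * grad L(theta)), where psi is the mirror map whose induced geometry the paper meta-learns. Below is a minimal PyTorch sketch of this per-task inner loop. It is an illustration under simplifying assumptions, not the paper's implementation: here psi is a simple p-norm potential psi(theta) = (1/p) * sum_i |theta_i|^p, chosen because its gradient inverts in closed form, with the exponent p standing in for the nonlinear, meta-learned mirror map advocated in the paper.

import torch

def mirror_map_grad(theta, p):
    # grad psi(theta) for the elementwise potential psi = (1/p) * sum |theta_i|^p
    return theta.sign() * theta.abs().pow(p - 1)

def mirror_map_grad_inv(z, p):
    # Closed-form inverse of grad psi: maps dual variables back to parameters.
    return z.sign() * z.abs().pow(1.0 / (p - 1))

def adapt(theta0, loss_fn, p, lr=0.1, steps=5):
    # Per-task adaptation: a few mirror-descent steps starting from the
    # meta-learned prior theta0. Each step maps the parameters to the dual
    # space via grad psi, takes a gradient step there, and maps back:
    #   theta <- (grad psi)^{-1}( grad psi(theta) - lr * grad L(theta) ).
    # With p = 2 this reduces to ordinary gradient descent.
    theta = theta0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(theta)
        (g,) = torch.autograd.grad(loss, theta)
        with torch.no_grad():
            z = mirror_map_grad(theta, p) - lr * g
            theta = mirror_map_grad_inv(z, p)
        theta.requires_grad_(True)
    return theta

# Toy usage: adapt a parameter vector to a hypothetical quadratic task loss.
target = torch.tensor([1.0, -2.0, 0.5])
loss_fn = lambda th: ((th - target) ** 2).sum()
theta_prior = torch.zeros(3)
theta_task = adapt(theta_prior, loss_fn, p=1.5)

In the full method, p would be replaced by the parameters of a learned nonlinear mirror map, and those parameters would be optimized in the outer meta-learning loop so that a few such steps suffice across tasks.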
