Reptile: On First-Order Meta-Learning Algorithms

Ines · Updated 2024-11-14

On First-Order Meta-Learning Algorithms

Paper: https://arxiv.org/pdf/1803.02999.pdf
Code: https://github.com/openai/supervised-reptile
Tips: A meta-learning paper from OpenAI that is closely related to MAML.
(Reading notes)

1. Main idea

The goal is fast learning from a small number of samples on tasks drawn from a common distribution. This paper considers meta-learning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. Like first-order MAML, Reptile ignores the second-order partial derivatives, and the authors note that it is simpler to implement. As a meta-learning method, Reptile trains in a way that closely resembles conventional training; as the paper puts it, "Reptile is so similar to joint training that it is especially surprising that it works as a meta-learning algorithm." The paper also provides a theoretical analysis of both first-order MAML and Reptile.

2. MAML recap

This section reviews MAML.
The goal is to solve the following objective, where $\tau$ is a task sampled from the task distribution, $\phi$ are the initial parameters, $L$ is the loss function, and $U_{\tau}^{k}$ denotes the operator that performs $k$ parameter updates using data sampled from task $\tau$:
$$\min_{\phi}\; \mathbb{E}_{\tau}\left[ L_{\tau}\big(U_{\tau}^{k}(\phi)\big) \right]$$
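For concreteness (an assumption on our part; the paper allows $U$ to be any stochastic-gradient procedure, e.g., SGD or Adam), if one inner update is a single SGD step with learning rate $\alpha$, the $k$-step operator unrolls recursively:

$$U_{\tau}^{1}(\phi) = \phi - \alpha \nabla_{\phi} L_{\tau}(\phi), \qquad U_{\tau}^{k}(\phi) = U_{\tau}\big(U_{\tau}^{k-1}(\phi)\big)$$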
Let $A$ be the training samples and $B$ the held-out test samples drawn from a task. MAML still performs its inner updates on $A$, but the loss it minimizes is evaluated on $B$:
$$\min_{\phi}\; \mathbb{E}_{\tau}\left[ L_{\tau,B}\big(U_{\tau,A}(\phi)\big) \right]$$
Obtaining the gradient requires differentiating with respect to the parameters $\phi$ (the chain rule for a composite function):
$$g = \frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial \phi} = L_{\tau,B}'\big(U_{\tau,A}(\phi)\big) \times U_{\tau,A}'(\phi) = \frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial U_{\tau,A}(\phi)} \times \frac{\partial U_{\tau,A}(\phi)}{\partial \phi}$$
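As a concrete illustration, here is a minimal PyTorch sketch of this full (second-order) gradient for a one-step inner update on a toy linear-regression task. The batches `x_a, y_a, x_b, y_b`, the step size `alpha`, and the linear model are all illustrative assumptions, not details from the paper:

```python
# Minimal sketch of the full (second-order) MAML gradient with a one-step
# inner update on a toy linear-regression task. All names here (x_a, y_a,
# x_b, y_b, alpha) are illustrative assumptions, not from the paper.
import torch

phi = torch.randn(5, 1, requires_grad=True)   # initial parameters ϕ
alpha = 0.01                                  # inner-loop step size (assumed)

# Hypothetical task minibatches: A drives the inner update, B the outer loss.
x_a, y_a = torch.randn(10, 5), torch.randn(10, 1)
x_b, y_b = torch.randn(10, 5), torch.randn(10, 1)

# Inner update U_{τ,A}(ϕ): one SGD step on batch A. create_graph=True keeps
# the graph so the Jacobian ∂U_{τ,A}(ϕ)/∂ϕ can be backpropagated through.
loss_a = ((x_a @ phi - y_a) ** 2).mean()
(grad_a,) = torch.autograd.grad(loss_a, phi, create_graph=True)
phi_adapted = phi - alpha * grad_a

# Outer loss L_{τ,B}(U_{τ,A}(ϕ)); differentiating back to ϕ goes through the
# inner step, so g includes the second-order terms of the chain rule above.
loss_b = ((x_b @ phi_adapted - y_b) ** 2).mean()
(g,) = torch.autograd.grad(loss_b, phi)
```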
Applying the identity trick (the partial derivative of the second factor is replaced by the constant 1, i.e., the Jacobian $\partial U_{\tau,A}(\phi)/\partial \phi$ is taken to be the identity) yields first-order MAML:
$$g = \frac{\partial L_{\tau,B}(U_{\tau,A}(\phi))}{\partial U_{\tau,A}(\phi)}$$
In other words, the outer-loop descent direction is simply the direction that minimizes the loss on test batch $B$, evaluated at the parameters $U_{\tau,A}(\phi)$ obtained by training on batch $A$.
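Continuing the same toy setup, a sketch of the first-order variant only needs to avoid building the graph through the inner step: the adapted parameters are detached, and the gradient of the $B$-loss taken at them is applied directly to $\phi$ (step sizes again illustrative):

```python
# Sketch of first-order MAML in the same toy setup: the inner gradient is
# computed without create_graph and the adapted parameters are detached,
# so ∂U_{τ,A}(ϕ)/∂ϕ is effectively replaced by the identity.
import torch

phi = torch.randn(5, 1, requires_grad=True)
alpha, meta_lr = 0.01, 0.1                    # illustrative step sizes
x_a, y_a = torch.randn(10, 5), torch.randn(10, 1)
x_b, y_b = torch.randn(10, 5), torch.randn(10, 1)

# Inner step on batch A, with no graph kept through the update.
loss_a = ((x_a @ phi - y_a) ** 2).mean()
(grad_a,) = torch.autograd.grad(loss_a, phi)
phi_adapted = (phi - alpha * grad_a).detach().requires_grad_(True)

# g = ∂L_{τ,B}/∂U_{τ,A}(ϕ): gradient of the B-loss at the adapted
# parameters, applied directly as the outer update direction for ϕ.
loss_b = ((x_b @ phi_adapted - y_b) ** 2).mean()
(g,) = torch.autograd.grad(loss_b, phi_adapted)
with torch.no_grad():
    phi -= meta_lr * g
```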

3. Reptile

The algorithm proceeds as follows:
[Figure: Reptile algorithm pseudocode — sample a task $\tau$, compute $\widetilde{\phi} = U_{\tau}^{k}(\phi)$ with $k$ steps of SGD or Adam, then update $\phi \leftarrow \phi + \epsilon(\widetilde{\phi} - \phi)$.]
Note that within a single outer iteration, $\widetilde{\phi}$ is produced by taking $k$ inner steps, and only after those $k$ steps is the update direction $\widetilde{\phi} - \phi$ determined.
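Below is a minimal sketch of the Reptile outer loop. The sine-wave task sampler, network size, and hyperparameters (`k`, `alpha`, `epsilon`) are illustrative assumptions, loosely modeled on the paper's toy regression demo:

```python
# Minimal Reptile outer loop: adapt a copy of ϕ for k steps on one task,
# then move ϕ a fraction ε of the way toward the adapted parameters ϕ~.
import copy
import math
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
k, alpha, epsilon = 5, 0.01, 0.1   # inner steps, inner lr, meta step size

def sample_task():
    # Hypothetical task: regress onto a sine wave with random amplitude/phase.
    amp = torch.rand(1) * 4.9 + 0.1
    phase = torch.rand(1) * 2 * math.pi
    x = torch.rand(10, 1) * 10 - 5
    return x, amp * torch.sin(x + phase)

for _ in range(1000):                          # outer iterations
    phi = copy.deepcopy(model.state_dict())    # remember ϕ before adaptation
    x, y = sample_task()
    inner_opt = torch.optim.SGD(model.parameters(), lr=alpha)
    for _ in range(k):                         # ϕ~ = U_τ^k(ϕ): k SGD steps
        inner_opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        inner_opt.step()
    with torch.no_grad():                      # ϕ ← ϕ + ε(ϕ~ − ϕ)
        for name, p in model.named_parameters():
            p.copy_(phi[name] + epsilon * (p - phi[name]))
```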


