DPG

Deterministic Policy Gradient

Deep Deterministic Policy Gradient (DDPG)

When the action space is continuous, the policy can no longer be parameterized by a stochastic policy network $\pi_{\theta}(a \mid s)$ in the same way as before. Instead, it can be modeled by a deterministic policy $\mu_{\theta}(s)$, and the optimal policy is searched for within the deterministic policy space $\mathcal{D}$. Analogously to the stochastic case, the objective function to be optimized under a deterministic policy is:

$$
J(\theta) = \mathcal{E}_{s_{0} \sim b_{0}(\cdot)} \mathcal{E}_{s_{1} \sim p(\cdot \mid s_{0},\ \mu_{\theta}(s_{0}))} \cdots \mathcal{E}_{s_{\mathrm{T}} \sim p(\cdot \mid s_{\mathrm{T} - 1},\ \mu_{\theta}(s_{\mathrm{T} - 1}))} \left[ \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right]
$$
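
For concreteness, with horizon $\mathrm{T} = 1$ the nested expectations expand to:

$$
J(\theta) = \sum_{s_{0}} b_{0}(s_{0}) \left[ \mathcal{R}(s_{0},\ \mu_{\theta}(s_{0})) + \gamma \sum_{s_{1}} p(s_{1} \mid s_{0},\ \mu_{\theta}(s_{0})) \, \mathcal{R}(s_{1},\ \mu_{\theta}(s_{1})) \right]
$$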

Taking the gradient of the objective with respect to the policy parameters $\theta$ yields:

$$
\begin{aligned}
\nabla_{\theta} J(\theta) &= \nabla_{\theta} \sum_{s_{0}} \sum_{s_{1}} \cdots \sum_{s_{\mathrm{T}}} \left[ b_{0}(s_{0}) \prod_{t = 0}^{\mathrm{T} - 1} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right] \\[7mm]
&= \sum_{s_{0}} \sum_{s_{1}} \cdots \sum_{s_{\mathrm{T}}} \left[ b_{0}(s_{0}) \nabla_{\theta} \left( \prod_{t = 0}^{\mathrm{T} - 1} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \right) \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right] \\[7mm]
&+ \sum_{s_{0}} \sum_{s_{1}} \cdots \sum_{s_{\mathrm{T}}} \left[ b_{0}(s_{0}) \prod_{t = 0}^{\mathrm{T} - 1} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \nabla_{\theta} \left( \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right) \right] \\[7mm]
&= \sum_{s_{0}} \sum_{s_{1}} \cdots \sum_{s_{\mathrm{T}}} \left[ b_{0}(s_{0}) \prod_{t = 0}^{\mathrm{T} - 1} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \nabla_{\theta} \left( \sum_{t = 0}^{\mathrm{T} - 1} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \right) \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right] \\[7mm]
&+ \sum_{s_{0}} \sum_{s_{1}} \cdots \sum_{s_{\mathrm{T}}} \left[ b_{0}(s_{0}) \prod_{t = 0}^{\mathrm{T} - 1} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \nabla_{\theta} \left( \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right) \right] \\[7mm]
&= \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \left[ \left( \sum_{t = 0}^{\mathrm{T} - 1} \nabla_{\theta} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \right) \cdot \left( \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right) + \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \nabla_{\theta} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \right]
\end{aligned}
$$

The first part of the deterministic policy gradient can be simplified in the same way as in the stochastic policy gradient derivation:

$$
\begin{aligned}
&\mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \left[ \sum_{t = 0}^{\mathrm{T} - 1} \nabla_{\theta} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \cdot \gamma^{t + 1} \sum_{\tau = t + 1}^{\mathrm{T}} \gamma^{\tau - t - 1} \mathcal{R}(s_{\tau},\ \mu_{\theta}(s_{\tau})) \right] \\[7mm]
= &\sum_{t = 0}^{\mathrm{T} - 1} \gamma^{t + 1} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t + 1}} \Bigg[ \nabla_{\theta} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) \cdot \underset{v_{\mu_{\theta}}^{(t + 1)}(s_{t + 1})}{\underbrace{\mathcal{E}_{s_{t + 2}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \sum_{\tau = t + 1}^{\mathrm{T}} \gamma^{\tau - t - 1} \mathcal{R}(s_{\tau},\ \mu_{\theta}(s_{\tau}))}} \Bigg]
\end{aligned}
$$

where the state-value function and the action-value function under the deterministic policy are:

$$
v_{\mu}^{(t)}(s_{t}) = q_{\mu}^{(t)}(s_{t},\ \mu(s_{t})) = \mathcal{E}_{s_{t + 1}} \mathcal{E}_{s_{t + 2}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \left[ \sum_{\tau = t}^{\mathrm{T}} \gamma^{\tau - t} \mathcal{R}(s_{\tau},\ \mu(s_{\tau})) \right]
$$

Rearranging further, the deterministic policy gradient can be written as:

$$
\begin{aligned}
\nabla_{\theta} J(\theta) &= \sum_{t = 0}^{\mathrm{T} - 1} \gamma^{t + 1} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t + 1}} \bigg[ \nabla_{\theta} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) v_{\mu_{\theta}}^{(t + 1)}(s_{t + 1}) \bigg] + \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \bigg[ \nabla_{\theta} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) \bigg] \\[7mm]
&= \sum_{t = 0}^{\mathrm{T} - 1} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \bigg[ \nabla_{\theta} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) + \gamma \mathcal{E}_{s_{t + 1}} \bigg[ \nabla_{\theta} \ln p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) v_{\mu_{\theta}}^{(t + 1)}(s_{t + 1}) \bigg] \bigg] + \Upsilon_{\theta}(\mathrm{T}) \\[7mm]
&= \sum_{t = 0}^{\mathrm{T} - 1} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \bigg[ \nabla_{\theta} \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) + \gamma \sum_{s_{t + 1}} \nabla_{\theta} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) v_{\mu_{\theta}}^{(t + 1)}(s_{t + 1}) \bigg] + \Upsilon_{\theta}(\mathrm{T}) \\[7mm]
&= \sum_{t = 0}^{\mathrm{T} - 1} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \nabla_{\theta} \left[ \mathcal{R}(s_{t},\ \mu_{\theta}(s_{t})) + \gamma \sum_{s_{t + 1}} p(s_{t + 1} \mid s_{t},\ \mu_{\theta}(s_{t})) v_{\operatorname{sg}[\mu_{\theta}]}^{(t + 1)}(s_{t + 1}) \right] + \Upsilon_{\theta}(\mathrm{T}) \\[7mm]
&= \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \nabla_{\theta} q_{\operatorname{sg}[\mu_{\theta}]}^{(t)}(s_{t},\ \mu_{\theta}(s_{t})) = \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{t}} \left[ \nabla_{\theta} \mu_{\theta}(s_{t}) \nabla_{a} q_{\mu_{\theta}}^{(t)}(s_{t},\ a) \bigg|_{a = \mu_{\theta}(s_{t})} \right]
\end{aligned}
$$

where $\operatorname{sg}[\cdot]$ is the stop-gradient operator, and the residual term $\Upsilon_{\theta}(\mathrm{T})$ is:

$$
\Upsilon_{\theta}(\mathrm{T}) = \gamma^{\mathrm{T}} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \nabla_{\theta} \mathcal{R}(s_{\mathrm{T}},\ \mu_{\theta}(s_{\mathrm{T}})) = \gamma^{\mathrm{T}} \mathcal{E}_{s_{0}} \mathcal{E}_{s_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \nabla_{\theta} q_{\operatorname{sg}[\mu_{\theta}]}^{(\mathrm{T})} (s_{\mathrm{T}},\ \mu_{\theta}(s_{\mathrm{T}}))
$$

The DDPG method consists of an actor network $\mu_{\theta}(s)$ and a critic network $q_{w}(s,\ a)$ used to estimate $q_{\mu_{\theta}}(s,\ a)$. In practice, to balance exploration, the behavior policy that samples trajectories adds random noise $\xi$ on top of $\mu_{\theta}(s)$. Under this off-policy scheme, the deterministic policy gradient has to be approximated as:

$$
\nabla_{\theta} J(\theta) \approx \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0} \sim b_{0}(\cdot)} \mathcal{E}_{a_{0} \sim \pi(\cdot \mid s_{0})} \mathcal{E}_{s_{1} \sim p(\cdot \mid s_{0},\ a_{0})} \mathcal{E}_{a_{1} \sim \pi(\cdot \mid s_{1})} \cdots \mathcal{E}_{s_{t} \sim p(\cdot \mid s_{t - 1},\ a_{t - 1})} \left[ \nabla_{\theta} \mu_{\theta}(s_{t}) \nabla_{a} q_{w}(s_{t},\ a) \bigg|_{a = \mu_{\theta}(s_{t})} \right]
$$
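
In implementation terms, the deterministic actor and the noisy behavior policy described above might look like the following PyTorch sketch; the layer sizes, the noise scale `sigma`, and the action-bound handling are illustrative assumptions rather than details from the text.

```python
import torch
import torch.nn as nn


class DeterministicActor(nn.Module):
    """Deterministic policy mu_theta(s): maps a state directly to a continuous action."""

    def __init__(self, state_dim: int, action_dim: int, action_bound: float = 1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),   # squash the output to (-1, 1)
        )
        self.action_bound = action_bound             # rescale to the environment's action range

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.action_bound * self.net(state)


def behavior_action(actor: DeterministicActor, state: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Behavior policy for data collection: a = mu_theta(s) + xi with Gaussian noise xi."""
    with torch.no_grad():                             # acting does not need gradients
        a = actor(state)
        xi = sigma * torch.randn_like(a)              # exploration noise
        return (a + xi).clamp(-actor.action_bound, actor.action_bound)


# Example: one 3-dimensional state, 2-dimensional action.
actor = DeterministicActor(state_dim=3, action_dim=2)
noisy_action = behavior_action(actor, torch.randn(1, 3))
```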

In this off-policy training mode, an experience replay mechanism can be introduced to make more efficient use of samples, and the deterministic policy gradient is further approximated on a mini-batch of $n$ replayed states as:

$$
\nabla_{\theta} J(\theta) \approx \frac{1}{n} \sum_{i = 1}^{n} \nabla_{\theta} q_{w}(s_{i},\ \mu_{\theta}(s_{i})) = \frac{1}{n} \sum_{i = 1}^{n} \nabla_{\theta} \mu_{\theta}(s_{i}) \nabla_{a} q_{w}(s_{i},\ a) \bigg|_{a = \mu_{\theta}(s_{i})}
$$
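
In an automatic-differentiation framework, this gradient can be realized by performing gradient ascent on $\frac{1}{n} \sum_{i} q_{w}(s_{i},\ \mu_{\theta}(s_{i}))$ and letting backpropagation apply the chain rule $\nabla_{\theta} \mu_{\theta} \nabla_{a} q_{w}$. A minimal sketch, where the `critic(states, actions)` call signature and the optimizer object are assumptions:

```python
import torch


def actor_update(actor, critic, actor_optimizer, states):
    """One ascent step on (1/n) * sum_i q_w(s_i, mu_theta(s_i)) with respect to theta."""
    actor_loss = -critic(states, actor(states)).mean()   # negate because optimizers minimize
    actor_optimizer.zero_grad()
    actor_loss.backward()    # autograd applies grad_theta mu_theta * grad_a q_w automatically
    actor_optimizer.step()
    return actor_loss.item()
```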

The critic network, in turn, uses the squared temporal-difference error as its loss function:

$$
\ell(w) = \frac{1}{2n} \sum_{i = 1}^{n} \Big[ r_{i} + \gamma q_{w}(s_{i}',\ \mu_{\theta}(s_{i}')) - q_{w}(s_{i},\ a_{i}) \Big]^{2}
$$
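
A matching sketch of the critic step on a replayed mini-batch $(s_{i},\ a_{i},\ r_{i},\ s_{i}')$, treating the bootstrap target as a constant exactly as the loss above implies; terminal-state handling and target-network variants are omitted as implementation choices not fixed by the text:

```python
import torch


def critic_update(actor, critic, critic_optimizer,
                  states, actions, rewards, next_states, gamma=0.99):
    """Minimize (1/2n) * sum_i [r_i + gamma * q_w(s_i', mu_theta(s_i')) - q_w(s_i, a_i)]^2."""
    with torch.no_grad():                               # the TD target is not differentiated
        td_target = rewards + gamma * critic(next_states, actor(next_states))
    td_error = td_target - critic(states, actions)
    critic_loss = 0.5 * td_error.pow(2).mean()
    critic_optimizer.zero_grad()
    critic_loss.backward()
    critic_optimizer.step()
    return critic_loss.item()
```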

Twin Delayed Deep Deterministic Policy Gradient (TD3)

Like DQN, DDPG also suffers from non-uniform overestimation. Specifically, as the policy network's parameters are repeatedly updated, the policy tends toward the maximizer of the critic:

$$
\mu^{\star}(s) \in \argmax_{a} q_{w}(s,\ a)
$$

Meanwhile, the critic network is updated by bootstrapping, so under the maximizing action-selection rule above, positive errors gradually accumulate and cause overestimation. To mitigate this, the TD3 algorithm adopts the following tricks to obtain better results than DDPG:

Clipped Double Q

Following the idea of Double Q-learning, a target actor network is used for action selection while target critic networks compute the TD target; the resulting TD error is then used to update the critic networks. The difference is that this approach introduces two critic networks:

| Network type | Online network | Target network |
| --- | --- | --- |
| actor | $\mu_{\theta}(s)$ | $\mu_{\theta^{-}}(s)$ |
| critic | $q_{w_{1}}(s,\ a)$ | $q_{w_{1}^{-}}(s,\ a)$ |
| critic | $q_{w_{2}}(s,\ a)$ | $q_{w_{2}^{-}}(s,\ a)$ |

The smaller of the two target critics' estimates is taken as the TD target for updating both critic networks:

$$
g = r + \gamma \min \Big( q_{w_{1}^{-}}(s',\ \mu_{\theta^{-}}(s')),\ q_{w_{2}^{-}}(s',\ \mu_{\theta^{-}}(s')) \Big)
$$
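
A sketch of computing this clipped TD target from the target networks in the table above; the network objects, call signatures, and tensor names are assumptions:

```python
import torch


def clipped_double_q_target(target_actor, target_critic1, target_critic2,
                            rewards, next_states, gamma=0.99):
    """g = r + gamma * min( q_{w1^-}(s', mu_{theta^-}(s')), q_{w2^-}(s', mu_{theta^-}(s')) )."""
    with torch.no_grad():                      # target computation never needs gradients
        next_actions = target_actor(next_states)
        q1 = target_critic1(next_states, next_actions)
        q2 = target_critic2(next_states, next_actions)
        return rewards + gamma * torch.min(q1, q2)
```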

Action Noise

Noise can additionally be added to the action selected by the target actor network, for example noise drawn from a truncated normal distribution (which keeps the noise within the range $[-c,\ c]$):

$$
\hat{a} = \mu_{\theta^{-}}(s) + \xi \Longleftarrow \xi_{i} \sim \mathcal{CN}(0,\ \sigma^{2},\ -c,\ c)
$$
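
In code, the truncated noise is often approximated by simply clipping a Gaussian sample to $[-c,\ c]$; a sketch under that assumption, with `sigma`, `c`, and `action_bound` as illustrative values:

```python
import torch


def smoothed_target_action(target_actor, next_states, sigma=0.2, c=0.5, action_bound=1.0):
    """a_hat = mu_{theta^-}(s) + xi, with each noise component kept inside [-c, c]."""
    with torch.no_grad():
        a = target_actor(next_states)
        xi = (sigma * torch.randn_like(a)).clamp(-c, c)   # clipped Gaussian as a stand-in for CN
        return (a + xi).clamp(-action_bound, action_bound)
```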

Lower Update Frequency

The actor network, the target actor network, and the two target critic networks should be updated less frequently than the two critic networks; for example, one policy-improvement step and one target-network synchronization are performed every $k$ iterations, which improves the stability of the algorithm.
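
A sketch of this schedule: the critics learn every step, while the policy-improvement step and the target-network synchronization happen only every `k` steps. Polyak averaging is used here as one common synchronization rule, and `replay.sample()`, `update_critics`, and `update_actor` are assumed helpers rather than APIs defined in the text:

```python
import copy
import torch


def soft_update(target_net, net, tau=0.005):
    """Polyak averaging: let the target copy slowly track the online network."""
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), net.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)


def train(steps, k, actor, critic1, critic2, replay, update_critics, update_actor):
    """Delayed updates: critics every step, actor and target networks every k steps."""
    target_actor = copy.deepcopy(actor)
    target_critic1, target_critic2 = copy.deepcopy(critic1), copy.deepcopy(critic2)
    for step in range(1, steps + 1):
        batch = replay.sample()                                         # assumed replay-buffer API
        update_critics(batch, target_actor, target_critic1, target_critic2)
        if step % k == 0:                                               # delayed policy improvement
            update_actor(batch)
            soft_update(target_actor, actor)
            soft_update(target_critic1, critic1)
            soft_update(target_critic2, critic2)
```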

