Multi-Agent A2C


MAN-A2C (Multi-Agent Noncooperative A2C)

Choose the state-value function $v_{i}^{(t)}(s_{t})$ as the baseline, and approximate the policy gradient with sampled trajectories:

$$
\begin{aligned}
\nabla_{\theta_{i}} J_{i}(\theta_{i})
&= \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big( \underbrace{q_{i}^{(t)}(s_{t},\ a_{t}) - v_{i}^{(t)}(s_{t})}_{d_{i}^{(t)}(s_{t},\ a_{t})} \Big) \Big] \\[7mm]
&\approx \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, d_{i}^{(t)}(s_{t},\ a_{t}) \overset{\gamma = 1}{\longrightarrow} \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, d_{i}^{(t)}(s_{t},\ a_{t}) \\[7mm]
&= \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big[ \mathcal{E}_{r_{t + 1}^{i}} r_{t + 1}^{i} + \gamma \mathcal{E}_{s_{t + 1}} v_{i}^{(t + 1)}(s_{t + 1}) - v_{i}^{(t)}(s_{t}) \Big] \\[7mm]
&\approx \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big[ r_{t + 1}^{i} + \gamma v_{i}^{(t + 1)}(s_{t + 1}) - v_{i}^{(t)}(s_{t}) \Big]
\end{aligned}
$$
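The step to the third line substitutes the Bellman expectation form of the per-agent action value, which is worth spelling out:

$$
q_{i}^{(t)}(s_{t},\ a_{t}) = \mathcal{E}_{r_{t + 1}^{i}} r_{t + 1}^{i} + \gamma \mathcal{E}_{s_{t + 1}} v_{i}^{(t + 1)}(s_{t + 1})
$$

so the advantage $d_{i}^{(t)}(s_{t},\ a_{t})$ can be estimated from a single transition without learning an action-value network.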

A value network $v_{i}(s \mid w_{i})$ and a target value network $v_{i}(s \mid w_{i}^{-})$ are then used to approximate the TD error term:

$$
\nabla_{\theta_{i}} J_{i}(\theta_{i}) \approx \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big[ r_{t + 1}^{i} + \gamma v_{i}(s_{t + 1} \mid w_{i}^{-}) - v_{i}(s_{t} \mid w_{i}) \Big] = \sum_{t = 0}^{\mathrm{T}} \delta_{t}^{i}\, \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})
$$

Similarly, the TD error $\delta_{t}^{i}$ is used both to approximate the policy gradient and to update the value-network parameters:

$$
\ell_{i}(w_{i}) = \frac{1}{2} \Big[ \delta_{t}^{i} \Big]^{2} = \frac{1}{2} \Big[ r_{t + 1}^{i} + \gamma v_{i}(s_{t + 1} \mid w_{i}^{-}) - v_{i}(s_{t} \mid w_{i}) \Big]^{2} \Rightarrow \nabla_{w_{i}} \ell_{i}(w_{i}) = -\delta_{t}^{i} \nabla_{w_{i}} v_{i}(s_{t} \mid w_{i})
$$
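As a concrete reference, here is a minimal PyTorch sketch of one update for a single agent $i$: the TD error $\delta_{t}^{i}$ is computed with the target value network, then reused for the critic loss $\frac{1}{2}[\delta_{t}^{i}]^{2}$ and the actor loss $-\delta_{t}^{i} \ln \pi_{i}$. The helper names (`make_agent`, `td_step`), network sizes, learning rates and the Adam optimizer are illustrative assumptions, not part of the derivation above.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

def make_agent(input_dim, act_dim, lr_actor=3e-4, lr_critic=1e-3):
    """Bundle one agent's actor pi_i, critic v_i(.|w_i), target critic v_i(.|w_i^-) and optimizers.
    All sizes and learning rates are illustrative assumptions."""
    actor = nn.Sequential(nn.Linear(input_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
    critic = nn.Sequential(nn.Linear(input_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target = nn.Sequential(nn.Linear(input_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target.load_state_dict(critic.state_dict())
    return {"actor": actor, "critic": critic, "target": target,
            "actor_opt": torch.optim.Adam(actor.parameters(), lr=lr_actor),
            "critic_opt": torch.optim.Adam(critic.parameters(), lr=lr_critic)}

def td_step(agent, x_t, a_t, r_tp1, x_tp1, gamma=0.99):
    """One update from a single transition; x_t is whatever the networks condition on (s_t or o_t)."""
    # delta_t^i = r_{t+1}^i + gamma * v_i(x_{t+1} | w_i^-) - v_i(x_t | w_i)
    with torch.no_grad():
        v_next = agent["target"](x_tp1).squeeze(-1)
    delta = r_tp1 + gamma * v_next - agent["critic"](x_t).squeeze(-1)

    # Critic: minimize 0.5 * delta^2, i.e. w_i <- w_i + alpha * delta * grad_w v_i(x_t | w_i)
    agent["critic_opt"].zero_grad()
    (0.5 * delta.pow(2)).mean().backward()
    agent["critic_opt"].step()

    # Actor: theta_i <- theta_i + beta * delta * grad ln pi_i(a_t^i | x_t); delta is a constant here
    log_prob = Categorical(logits=agent["actor"](x_t)).log_prob(a_t)
    agent["actor_opt"].zero_grad()
    (-(delta.detach() * log_prob)).mean().backward()
    agent["actor_opt"].step()
    return delta.detach()
```

The same helpers are reused in the scheme-specific sketches below, which differ only in what `x_t` carries; periodic synchronization of the target network with the critic is omitted for brevity.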

Under partial observability, MA-A2C can be implemented by modeling the policies or value functions on local information, which lowers the communication cost at training or execution time:

| Scheme | Description | Policy network | Value network | Comm. during training | Comm. during execution |
| --- | --- | --- | --- | --- | --- |
| CTCE | Centralized training, centralized execution | $\pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})$ | $v_{i}(s_{t} \mid w_{i})$ | ✓ | ✓ |
| DTDE | Decentralized training, decentralized execution | $\pi_{i}(a_{t}^{i} \mid o_{t};\ \theta_{i})$ | $v_{i}(o_{t} \mid w_{i})$ | × | × |
| CTDE | Centralized training, decentralized execution | $\pi_{i}(a_{t}^{i} \mid o_{t};\ \theta_{i})$ | $v_{i}(s_{t} \mid w_{i})$ | ✓ | × |

CTCE

The CTCE scheme follows the multi-agent A2C formulation above exactly. With access to the full state it can make better decisions, but the communication cost is high.

In the implementation, each agent's value-network parameters and policy parameters are updated online and alternately with gradient steps:

$$
\begin{gathered}
w_{1} \leftarrow w_{1} + \alpha \delta_{t}^{1} \nabla_{w_{1}} v_{1}(s_{t} \mid w_{1}) \\[5mm]
w_{2} \leftarrow w_{2} + \alpha \delta_{t}^{2} \nabla_{w_{2}} v_{2}(s_{t} \mid w_{2}) \\[5mm]
\vdots \\[5mm]
w_{n} \leftarrow w_{n} + \alpha \delta_{t}^{n} \nabla_{w_{n}} v_{n}(s_{t} \mid w_{n})
\end{gathered}
$$

$$
\begin{gathered}
\theta_{1} \leftarrow \theta_{1} + \beta \delta_{t}^{1} \nabla_{\theta_{1}} \ln \pi_{1}(a_{t}^{1} \mid s_{t};\ \theta_{1}) \\[5mm]
\theta_{2} \leftarrow \theta_{2} + \beta \delta_{t}^{2} \nabla_{\theta_{2}} \ln \pi_{2}(a_{t}^{2} \mid s_{t};\ \theta_{2}) \\[5mm]
\vdots \\[5mm]
\theta_{n} \leftarrow \theta_{n} + \beta \delta_{t}^{n} \nabla_{\theta_{n}} \ln \pi_{n}(a_{t}^{n} \mid s_{t};\ \theta_{n})
\end{gathered}
$$
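A minimal sketch of the CTCE loop, reusing the assumed `make_agent`/`td_step` helpers from the sketch above: every actor and critic is conditioned on the global state $s_t$, and the per-agent TD errors $\delta_{t}^{i}$ drive the alternating updates. The dimensions are illustrative.

```python
# Illustrative CTCE loop (reuses make_agent / td_step from the earlier sketch).
# Every network is conditioned on the global state s_t.
n_agents, state_dim, act_dim = 3, 12, 4
agents = [make_agent(state_dim, act_dim) for _ in range(n_agents)]

def ctce_update(s_t, actions, rewards, s_tp1):
    """Alternate per-agent updates of w_i and theta_i with the individual TD errors delta_t^i."""
    for i, agent in enumerate(agents):
        td_step(agent, s_t, actions[i], rewards[i], s_tp1)
```

In practice $s_t$ is assembled by gathering every agent's observation, which is exactly the communication cost at both training and execution time noted in the table above.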

DTDE

In DTDE, each agent treats its local observation as an approximation of the global state and models its value network independently. The parameters are updated as:

$$
\begin{gathered}
w_{1} \leftarrow w_{1} + \alpha \delta_{t}^{1} \nabla_{w_{1}} v_{1}(o_{t} \mid w_{1}) \\[5mm]
w_{2} \leftarrow w_{2} + \alpha \delta_{t}^{2} \nabla_{w_{2}} v_{2}(o_{t} \mid w_{2}) \\[5mm]
\vdots \\[5mm]
w_{n} \leftarrow w_{n} + \alpha \delta_{t}^{n} \nabla_{w_{n}} v_{n}(o_{t} \mid w_{n})
\end{gathered}
$$

$$
\begin{gathered}
\theta_{1} \leftarrow \theta_{1} + \beta \delta_{t}^{1} \nabla_{\theta_{1}} \ln \pi_{1}(a_{t}^{1} \mid o_{t};\ \theta_{1}) \\[5mm]
\theta_{2} \leftarrow \theta_{2} + \beta \delta_{t}^{2} \nabla_{\theta_{2}} \ln \pi_{2}(a_{t}^{2} \mid o_{t};\ \theta_{2}) \\[5mm]
\vdots \\[5mm]
\theta_{n} \leftarrow \theta_{n} + \beta \delta_{t}^{n} \nabla_{\theta_{n}} \ln \pi_{n}(a_{t}^{n} \mid o_{t};\ \theta_{n})
\end{gathered}
$$

This scheme is essentially independent single-agent A2C: it ignores the interactions between agents, and in practice it often performs poorly. A sketch follows below.
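The corresponding DTDE sketch only changes the input: agent $i$'s actor and critic see only its own local observation, so the $n$ learners never communicate (again reusing the assumed helpers from the earlier sketch; dimensions are illustrative).

```python
# Illustrative DTDE loop: agent i conditions on its own local observation only,
# so it is effectively an independent single-agent A2C learner (no communication).
n_agents, obs_dim, act_dim = 3, 6, 4
agents = [make_agent(obs_dim, act_dim) for _ in range(n_agents)]

def dtde_update(obs_t, actions, rewards, obs_tp1):
    """obs_t[i] / obs_tp1[i]: agent i's local observations at t and t+1."""
    for i, agent in enumerate(agents):
        td_step(agent, obs_t[i], actions[i], rewards[i], obs_tp1[i])
```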

CTDE

In CTDE, the observations of all agents are gathered through communication during training to form the global state for training. The parameters are updated as:

$$
\begin{gathered}
w_{1} \leftarrow w_{1} + \alpha \delta_{t}^{1} \nabla_{w_{1}} v_{1}(s_{t} \mid w_{1}) \\[5mm]
w_{2} \leftarrow w_{2} + \alpha \delta_{t}^{2} \nabla_{w_{2}} v_{2}(s_{t} \mid w_{2}) \\[5mm]
\vdots \\[5mm]
w_{n} \leftarrow w_{n} + \alpha \delta_{t}^{n} \nabla_{w_{n}} v_{n}(s_{t} \mid w_{n})
\end{gathered}
$$

$$
\begin{gathered}
\theta_{1} \leftarrow \theta_{1} + \beta \delta_{t}^{1} \nabla_{\theta_{1}} \ln \pi_{1}(a_{t}^{1} \mid o_{t};\ \theta_{1}) \\[5mm]
\theta_{2} \leftarrow \theta_{2} + \beta \delta_{t}^{2} \nabla_{\theta_{2}} \ln \pi_{2}(a_{t}^{2} \mid o_{t};\ \theta_{2}) \\[5mm]
\vdots \\[5mm]
\theta_{n} \leftarrow \theta_{n} + \beta \delta_{t}^{n} \nabla_{\theta_{n}} \ln \pi_{n}(a_{t}^{n} \mid o_{t};\ \theta_{n})
\end{gathered}
$$

Although each policy is approximated as depending only on the local observation, centralized training coordinates the agents' behavior, so CTDE usually performs better than DTDE. After training, each agent makes decisions from its local observation in a decentralized way, which is also more efficient than CTCE.
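A sketch of the CTDE split under the same kind of illustrative assumptions: the critic $v_{i}(s_{t} \mid w_{i})$ consumes the global state during training, while the actor $\pi_{i}(a_{t}^{i} \mid o_{t};\ \theta_{i})$ needs only the local observation, so execution requires no communication.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Illustrative CTDE setup: critics see the global state s_t, actors see local observations.
n_agents, state_dim, obs_dim, act_dim = 3, 12, 6, 4

def make_ctde_agent():
    actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
    critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target.load_state_dict(critic.state_dict())
    return {"actor": actor, "critic": critic, "target": target,
            "actor_opt": torch.optim.Adam(actor.parameters(), lr=3e-4),
            "critic_opt": torch.optim.Adam(critic.parameters(), lr=1e-3)}

agents = [make_ctde_agent() for _ in range(n_agents)]

def ctde_update(s_t, obs_t, actions, rewards, s_tp1, gamma=0.99):
    """Centralized training: the critics need s_t (communication), the actors only o_t^i."""
    for i, ag in enumerate(agents):
        with torch.no_grad():
            v_next = ag["target"](s_tp1).squeeze(-1)
        delta = rewards[i] + gamma * v_next - ag["critic"](s_t).squeeze(-1)

        ag["critic_opt"].zero_grad()
        (0.5 * delta.pow(2)).mean().backward()
        ag["critic_opt"].step()

        log_prob = Categorical(logits=ag["actor"](obs_t[i])).log_prob(actions[i])
        ag["actor_opt"].zero_grad()
        (-(delta.detach() * log_prob)).mean().backward()
        ag["actor_opt"].step()

def act(i, o_i):
    """Decentralized execution: agent i samples from pi_i(. | o_t^i) alone, no communication."""
    with torch.no_grad():
        return Categorical(logits=agents[i]["actor"](o_i)).sample()
```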

MAC-A2C (Multi-Agent Cooperative A2C)

In the fully cooperative setting, all agents receive the same reward, so the state-value model used in centralized training can be simplified to a single shared critic, giving the MAC-A2C algorithm:

$$
w \leftarrow w + \alpha \delta_{t} \nabla_{w} v(s_{t} \mid w)
$$

$$
\begin{gathered}
\theta_{1} \leftarrow \theta_{1} + \beta \delta_{t} \nabla_{\theta_{1}} \ln \pi_{1}(a_{t}^{1} \mid o_{t};\ \theta_{1}) \\[5mm]
\theta_{2} \leftarrow \theta_{2} + \beta \delta_{t} \nabla_{\theta_{2}} \ln \pi_{2}(a_{t}^{2} \mid o_{t};\ \theta_{2}) \\[5mm]
\vdots \\[5mm]
\theta_{n} \leftarrow \theta_{n} + \beta \delta_{t} \nabla_{\theta_{n}} \ln \pi_{n}(a_{t}^{n} \mid o_{t};\ \theta_{n})
\end{gathered}
$$
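A minimal MAC-A2C sketch under the same illustrative assumptions: a single shared critic $v(s_{t} \mid w)$ produces one TD error $\delta_{t}$, and that same scalar updates every actor.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Illustrative MAC-A2C setup: one shared critic (rewards are identical across agents).
n_agents, state_dim, obs_dim, act_dim, gamma = 3, 12, 6, 4, 0.99

critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
target = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
target.load_state_dict(critic.state_dict())
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
          for _ in range(n_agents)]
actor_opts = [torch.optim.Adam(a.parameters(), lr=3e-4) for a in actors]

def mac_a2c_update(s_t, obs_t, actions, reward, s_tp1):
    # Shared TD error: delta_t = r_{t+1} + gamma * v(s_{t+1} | w^-) - v(s_t | w)
    with torch.no_grad():
        v_next = target(s_tp1).squeeze(-1)
    delta = reward + gamma * v_next - critic(s_t).squeeze(-1)

    # w <- w + alpha * delta * grad_w v(s_t | w)
    critic_opt.zero_grad()
    (0.5 * delta.pow(2)).mean().backward()
    critic_opt.step()

    # theta_i <- theta_i + beta * delta * grad ln pi_i(a_t^i | o_t^i); same delta for every agent
    for i in range(n_agents):
        log_prob = Categorical(logits=actors[i](obs_t[i])).log_prob(actions[i])
        actor_opts[i].zero_grad()
        (-(delta.detach() * log_prob)).mean().backward()
        actor_opts[i].step()
```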

COMA (COunterfactual Multi-Agent policy gradient)

In MAC-A2C, the baseline that guides every agent's policy improvement is the same:

$$
v^{(t)}(s_{t}) = \mathcal{E}_{a_{t}} q^{(t)}(s_{t},\ a_{t}) = \mathcal{E}_{a_{t}^{1} \sim \pi_{1}(\cdot \mid s_{t})} \mathcal{E}_{a_{t}^{2} \sim \pi_{2}(\cdot \mid s_{t})} \cdots \mathcal{E}_{a_{t}^{n} \sim \pi_{n}(\cdot \mid s_{t})} q^{(t)}(s_{t},\ a_{t}^{1},\ a_{t}^{2},\ \cdots,\ a_{t}^{n})
$$

Fix the other agents' actions $a_{t}^{-i} = a_{t} \setminus a_{t}^{i}$. If every action $a_{t}^{i}$ of some agent satisfies $q^{(t)}(s_{t},\ a_{t}^{i},\ a_{t}^{-i}) > v^{(t)}(s_{t})$, then the update

$$
\theta_{i} \leftarrow \theta_{i} + \alpha \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big[ q^{(t)}(s_{t},\ a_{t}) - v^{(t)}(s_{t}) \Big]
$$

always tends to increase $\pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})$, even when the value of this action is lower than the average value of the agent's other actions:

$$
q^{(t)}(s_{t},\ a_{t}^{i},\ a_{t}^{-i}) < v_{i}^{(t)}(s_{t},\ a_{t}^{-i}) = \mathcal{E}_{a_{t}^{i} \sim \pi_{i}(\cdot \mid s_{t})} q^{(t)}(s_{t},\ a_{t}^{i},\ a_{t}^{-i})
$$

and it correspondingly lowers the sampling probability of better actions. Because credit is assigned too uniformly, some agents update their policies inefficiently under the stochastic gradient above. COMA therefore replaces the baseline with the counterfactual baseline $v_{i}^{(t)}(s_{t},\ a_{t}^{-i})$ defined above:

$$
\begin{aligned}
\nabla_{\theta_{i}} J_{i}(\theta_{i})
&= \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big( q^{(t)}(s_{t},\ a_{t}) - v_{i}^{(t)}(s_{t},\ a_{t}^{-i}) \Big) \Big] \\[7mm]
&\overset{\gamma = 1}{\approx} \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \left[ q(s_{t},\ a_{t}^{i},\ a_{t}^{-i} \mid w) - \sum_{a_{t}^{i}} \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, q(s_{t},\ a_{t}^{i},\ a_{t}^{-i} \mid w) \right]
\end{aligned}
$$
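A sketch of the COMA advantage under illustrative assumptions about the centralized critic's input layout (global state, the other agents' one-hot actions, and a one-hot agent id): the critic outputs $q(s_{t},\ \cdot,\ a_{t}^{-i} \mid w)$ for every candidate action of agent $i$, and the counterfactual baseline marginalizes those values under $\pi_{i}$.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Illustrative shapes; the critic input layout and agent-id encoding are assumptions.
n_agents, state_dim, obs_dim, act_dim = 3, 12, 6, 4

# Centralized critic: input = global state + one-hot actions of the other agents + one-hot agent id;
# output = one Q value per candidate action a_t^i of the queried agent.
critic = nn.Sequential(
    nn.Linear(state_dim + (n_agents - 1) * act_dim + n_agents, 128), nn.ReLU(),
    nn.Linear(128, act_dim))
actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
          for _ in range(n_agents)]

def coma_advantage(i, s_t, obs_i, actions):
    """q(s_t, a_t^i, a_t^{-i} | w) - sum_a pi_i(a | o_t^i) q(s_t, a, a_t^{-i} | w).
    `actions` is a list of scalar LongTensors, one per agent."""
    others = torch.cat([nn.functional.one_hot(actions[j], act_dim).float()
                        for j in range(n_agents) if j != i])
    agent_id = nn.functional.one_hot(torch.tensor(i), n_agents).float()
    q_all = critic(torch.cat([s_t, others, agent_id]))   # q(s_t, ., a_t^{-i}), shape (act_dim,)
    pi = Categorical(logits=actors[i](obs_i)).probs      # pi_i(. | o_t^i)
    baseline = (pi * q_all).sum()                        # counterfactual baseline
    return q_all[actions[i]] - baseline                  # advantage of the taken action

def coma_actor_loss(i, s_t, obs_i, actions):
    """Policy-gradient loss for agent i; the advantage is treated as a constant."""
    adv = coma_advantage(i, s_t, obs_i, actions).detach()
    log_prob = Categorical(logits=actors[i](obs_i)).log_prob(actions[i])
    return -adv * log_prob
```

Training of the centralized critic itself (for example with TD targets on the joint action value) is omitted here for brevity.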

