Stochastic Game / Markov Game


Basic Definitions

The interaction of multiple agents within a shared environment can be modeled as a stochastic game, whose components are listed in the table below (a minimal code container for this tuple follows the table):

| Component | Definition | Component | Definition |
| --- | --- | --- | --- |
| Agent set | $\mathcal{N} = \{ 1,\ 2,\ \cdots,\ n \}$ | Joint action space | $\mathcal{A} = \mathcal{A}^{1} \times \mathcal{A}^{2} \times \cdots \times \mathcal{A}^{n}$ |
| State space | $\mathcal{S}$ | Reward function of agent $i$ | $\mathcal{R}^{i} : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ |
| Action space of agent $i$ | $\mathcal{A}^{i}$ | State transition probability | $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0,\ 1]$ |
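
As a sketch only, here is one way to hold these components in code for a small tabular game. The class and field names are hypothetical, and indexing the transition and reward arrays by the joint action is just one of several reasonable encodings.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class StochasticGame:
    """Minimal tabular container for the tuple (N, S, {A^i}, P, {R^i})."""
    n_agents: int                 # |N|
    n_states: int                 # |S|
    n_actions: tuple[int, ...]    # (|A^1|, ..., |A^n|)
    P: np.ndarray                 # shape (|S|, |A^1|, ..., |A^n|, |S|); each row sums to 1
    R: np.ndarray                 # shape (n, |S|, |A^1|, ..., |A^n|); one reward table per agent
```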

Each agent makes decisions based on the environment state, and each agent maintains its own independent policy:

$$\pi_{1}(\cdot \mid \cdot) : \mathcal{S} \times \mathcal{A}^{1} \mapsto [0,\ 1] \quad \pi_{2}(\cdot \mid \cdot) : \mathcal{S} \times \mathcal{A}^{2} \mapsto [0,\ 1] \quad \cdots \quad \pi_{n}(\cdot \mid \cdot) : \mathcal{S} \times \mathcal{A}^{n} \mapsto [0,\ 1]$$

The state transition depends on the current state $s_{t}$ and the joint action $a_{t} = \{ a_{t}^{1},\ a_{t}^{2},\ \cdots,\ a_{t}^{n} \}$. The joint policy can be written as:

$$\pi(a_{t} \mid s_{t}) = \pi(a_{t}^{1},\ a_{t}^{2},\ \cdots,\ a_{t}^{n} \mid s_{t}) = \prod_{i = 1}^{n} \pi_{i}(a_{t}^{i} \mid s_{t})$$
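
The factorization above means a joint action can be sampled by sampling each agent independently. The sketch below assumes each `policies[i]` is a hypothetical table of shape $(|\mathcal{S}|, |\mathcal{A}^{i}|)$ whose row $s$ is $\pi_{i}(\cdot \mid s)$; it is an illustration of the product form, not a fixed API.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_joint_action(policies, s, rng=rng):
    """Sample a_t = (a_t^1, ..., a_t^n), one action per agent, independently."""
    return tuple(rng.choice(len(pi_i[s]), p=pi_i[s]) for pi_i in policies)

def joint_prob(policies, s, a):
    """pi(a | s) = prod_i pi_i(a^i | s) under the product (joint) policy."""
    return float(np.prod([pi_i[s][a_i] for pi_i, a_i in zip(policies, a)]))
```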

In a stochastic game, each agent's objective is to maximize its own expected cumulative discounted reward:

$$J_{i}(\pi) = \mathcal{E}_{s_{0} \sim b_{0}(\cdot)} \mathcal{E}_{a_{0} \sim \pi(\cdot \mid s_{0})} \mathcal{E}_{s_{1} \sim p(\cdot \mid s_{0},\ a_{0})} \mathcal{E}_{a_{1} \sim \pi(\cdot \mid s_{1})} \cdots \mathcal{E}_{s_{\mathrm{T}} \sim p(\cdot \mid s_{\mathrm{T} - 1},\ a_{\mathrm{T} - 1})} \mathcal{E}_{a_{\mathrm{T}} \sim \pi(\cdot \mid s_{\mathrm{T}})} \left[ \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}^{i}(s_{t},\ a_{t}) \right]$$
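
The bracketed quantity is just the discounted return of one sampled trajectory; $J_{i}(\pi)$ is its expectation over trajectories drawn from $b_{0}$, $\pi$, and $p$. A one-line illustrative helper (the function name is hypothetical):

```python
def discounted_return(rewards_i, gamma):
    """sum_{t=0}^{T} gamma^t * R^i(s_t, a_t) for one agent's sampled reward sequence."""
    return sum(gamma ** t * r for t, r in enumerate(rewards_i))
```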

Similarly, define each agent's action-value function and state-value function:

$$\begin{gathered} q_{i}^{(t)}(s_{t},\ a_{t} \mid \pi) = \mathcal{E}_{s_{t + 1} \sim p(\cdot \mid s_{t},\ a_{t})} \mathcal{E}_{a_{t + 1} \sim \pi(\cdot \mid s_{t + 1})} \cdots \mathcal{E}_{s_{\mathrm{T}} \sim p(\cdot \mid s_{\mathrm{T} - 1},\ a_{\mathrm{T} - 1})} \mathcal{E}_{a_{\mathrm{T}} \sim \pi(\cdot \mid s_{\mathrm{T}})} \left[ \sum_{\tau = t}^{\mathrm{T}} \gamma^{\tau - t} \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \right] \\[7mm] v_{i}^{(t)}(s_{t} \mid \pi) = \mathcal{E}_{a_{t} \sim \pi(\cdot \mid s_{t})} \mathcal{E}_{s_{t + 1} \sim p(\cdot \mid s_{t},\ a_{t})} \mathcal{E}_{a_{t + 1} \sim \pi(\cdot \mid s_{t + 1})} \cdots \mathcal{E}_{s_{\mathrm{T}} \sim p(\cdot \mid s_{\mathrm{T} - 1},\ a_{\mathrm{T} - 1})} \mathcal{E}_{a_{\mathrm{T}} \sim \pi(\cdot \mid s_{\mathrm{T}})} \left[ \sum_{\tau = t}^{\mathrm{T}} \gamma^{\tau - t} \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \right] \end{gathered}$$

Extending to the infinite-horizon setting:

$$\begin{gathered} q_{i}(s_{t},\ a_{t} \mid \pi) = \mathcal{E}_{s_{t + 1} \sim p(\cdot \mid s_{t},\ a_{t})} \mathcal{E}_{a_{t + 1} \sim \pi(\cdot \mid s_{t + 1})} \mathcal{E}_{s_{t + 2} \sim p(\cdot \mid s_{t + 1},\ a_{t + 1})} \mathcal{E}_{a_{t + 2} \sim \pi(\cdot \mid s_{t + 2})} \cdots \left[ \sum_{\tau = t}^{\infty} \gamma^{\tau - t} \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \right] \\[7mm] v_{i}(s_{t} \mid \pi) = \mathcal{E}_{a_{t} \sim \pi(\cdot \mid s_{t})} \mathcal{E}_{s_{t + 1} \sim p(\cdot \mid s_{t},\ a_{t})} \mathcal{E}_{a_{t + 1} \sim \pi(\cdot \mid s_{t + 1})} \mathcal{E}_{s_{t + 2} \sim p(\cdot \mid s_{t + 1},\ a_{t + 1})} \mathcal{E}_{a_{t + 2} \sim \pi(\cdot \mid s_{t + 2})} \cdots \left[ \sum_{\tau = t}^{\infty} \gamma^{\tau - t} \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \right] \end{gathered}$$

Under these definitions, each agent's action-value and state-value functions still satisfy the Bellman expectation equations.
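
Concretely, in the infinite-horizon case these are the usual one-step recursions, written here in the document's notation for completeness:

$$\begin{gathered} q_{i}(s_{t},\ a_{t} \mid \pi) = \mathcal{R}^{i}(s_{t},\ a_{t}) + \gamma\, \mathcal{E}_{s_{t + 1} \sim p(\cdot \mid s_{t},\ a_{t})} \big[ v_{i}(s_{t + 1} \mid \pi) \big] \\[3mm] v_{i}(s_{t} \mid \pi) = \mathcal{E}_{a_{t} \sim \pi(\cdot \mid s_{t})} \big[ q_{i}(s_{t},\ a_{t} \mid \pi) \big] \end{gathered}$$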

System Settings

In multi-agent reinforcement learning, several common settings describe the relationship among the agents' reward functions (the corresponding formal reward conditions are written out after the table):

| Setting | Description |
| --- | --- |
| Fully cooperative | All agents share the same interests and receive identical rewards; they pursue a common goal |
| Fully competitive | One side's gain is the other side's loss; rewards are negatively correlated, and in a zero-sum game the rewards sum to 0 |
| Mixed cooperative-competitive | Agents are partitioned into groups; agents cooperate within a group and compete across groups |
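
Stated formally, following the descriptions in the table, the first two settings correspond to the reward conditions below; the mixed setting applies the cooperative condition within each group:

$$\text{fully cooperative:}\quad \mathcal{R}^{1}(s,\ a) = \mathcal{R}^{2}(s,\ a) = \cdots = \mathcal{R}^{n}(s,\ a) \qquad\qquad \text{zero-sum:}\quad \sum_{i = 1}^{n} \mathcal{R}^{i}(s,\ a) = 0 \quad \forall\ (s,\ a)$$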

In the multi-agent setting, the hallmark of convergence in policy learning is that no agent's average return changes any further, i.e., the joint policy has reached a Nash equilibrium, formally defined as:

$$\forall\ s \in \mathcal{S}\ \ \forall\ i \in \mathcal{N} : v_{i}(s \mid \pi_{1}^{\star},\ \cdots,\ \pi_{i}^{\star},\ \cdots,\ \pi_{n}^{\star}) = \max_{\pi_{i}} v_{i}(s \mid \pi_{1}^{\star},\ \cdots,\ \pi_{i},\ \cdots,\ \pi_{n}^{\star})$$

That is, in any state $s$, no agent $i$ can obtain a higher expected return by unilaterally changing its policy. It can be shown that when the set of players, the action spaces, and the state space are all finite, a stochastic game admits a Nash equilibrium in stationary stochastic policies (proof).
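
As a sanity check of this condition in the simplest degenerate case, a single-state zero-sum matrix game, the sketch below verifies that the uniform policy in rock-paper-scissors admits no profitable unilateral deviation. The payoff matrix and variable names are illustrative assumptions; for stationary policies in a single-state game, the state-value condition reduces to comparing expected one-step payoffs.

```python
import numpy as np

# Player 1's payoff matrix for rock-paper-scissors; player 2 receives the negative.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

pi1 = np.full(3, 1 / 3)      # uniform mixed policy for player 1
pi2 = np.full(3, 1 / 3)      # uniform mixed policy for player 2

v1 = pi1 @ A @ pi2           # player 1's expected payoff under (pi1, pi2)
v2 = -v1                     # zero-sum: player 2's payoff is the negative

# Best unilateral deviation = best pure response against the opponent's fixed policy.
best1 = (A @ pi2).max()      # player 1 deviates while pi2 stays fixed
best2 = (-(pi1 @ A)).max()   # player 2 deviates while pi1 stays fixed

assert np.isclose(v1, best1) and np.isclose(v2, best2)
print("uniform vs. uniform is a Nash equilibrium:", v1, best1, v2, best2)
```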

Policy Gradient

With parameterized policies, let $\theta_{i}$ denote the parameters of agent $i$'s policy $\pi_{i}$. The gradient of each agent's objective with respect to its own policy parameters is:

$$\begin{aligned} \nabla_{\theta_{i}} J_{i}(\theta_{i}) &= \nabla_{\theta_{i}} \sum_{s_{0}} \sum_{a_{0}} \sum_{s_{1}} \sum_{a_{1}} \cdots \sum_{s_{\mathrm{T}}} \sum_{a_{\mathrm{T}}} \left[ b_{0}(s_{0}) \prod_{j = 1}^{n} \prod_{t = 0}^{\mathrm{T}} \pi_{j}(a_{t}^{j} \mid s_{t};\ \theta_{j}) \prod_{t = 1}^{\mathrm{T}} p(s_{t} \mid s_{t - 1},\ a_{t - 1}) \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}^{i}(s_{t},\ a_{t}) \right] \\[7mm] &= \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \mathcal{E}_{a_{\mathrm{T}}} \left[ \left( \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \right) \cdot \left( \sum_{t = 0}^{\mathrm{T}} \gamma^{t} \mathcal{R}^{i}(s_{t},\ a_{t}) \right) \right] \end{aligned}$$

Consider the expectation of the cross term $\nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, \mathcal{R}^{i}(s_{\tau},\ a_{\tau})$ when $t > \tau$:

$$\mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \mathcal{E}_{a_{\mathrm{T}}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \Big] = \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \Big]$$

where, conditioned on the earlier variables, the innermost expectation over $a_{t}$ is zero:

$$\mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \Big] = \mathcal{R}^{i}(s_{\tau},\ a_{\tau})\, \mathcal{E}_{a_{t}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) = \mathcal{R}^{i}(s_{\tau},\ a_{\tau})\, \mathcal{E}_{a_{t}^{i}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) = 0$$

The last equality holds because $\mathcal{E}_{a_{t}^{i}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) = \sum_{a_{t}^{i}} \nabla_{\theta_{i}} \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) = \nabla_{\theta_{i}} 1 = 0$. Therefore every cross term with $t > \tau$ vanishes, and the policy gradient simplifies to:

$$\begin{aligned} \nabla_{\theta_{i}} J_{i}(\theta_{i}) &= \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \mathcal{E}_{a_{\mathrm{T}}} \left[ \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \sum_{\tau = t}^{\mathrm{T}} \gamma^{\tau} \mathcal{R}^{i}(s_{\tau},\ a_{\tau}) \right] \\[7mm] &= \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Bigg[ \sum_{t = 0}^{\mathrm{T}} \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \cdot \gamma^{t} \underset{q_{i}^{(t)}(s_{t},\ a_{t})}{\underbrace{\mathcal{E}_{s_{t + 1}} \mathcal{E}_{a_{t + 1}} \cdots \mathcal{E}_{s_{\mathrm{T}}} \mathcal{E}_{a_{\mathrm{T}}} \sum_{\tau = t}^{\mathrm{T}} \gamma^{\tau - t} \mathcal{R}^{i}(s_{\tau},\ a_{\tau})}} \Bigg] \\[10mm] &= \sum_{t = 0}^{\mathrm{T}} \gamma^{t}\, \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, q_{i}^{(t)}(s_{t},\ a_{t}) \Big] \end{aligned}$$
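
As a concrete illustration of this formula, the sketch below forms a Monte-Carlo estimate of each agent's gradient from a single sampled trajectory, replacing $q_{i}^{(t)}(s_{t},\ a_{t})$ by the sampled return from time $t$ onward. Everything here (the randomly generated game, the two-agent indexing, the tabular softmax parameterization, and all names such as `P`, `R`, `theta`) is a hypothetical setup for illustration, not a full training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_states, n_actions, T, gamma = 2, 4, 3, 20, 0.95

# Random transition kernel P[s, a1, a2, s'] and per-agent rewards R[i][s, a1, a2] (2 agents assumed).
P = rng.random((n_states, n_actions, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)
R = rng.normal(size=(n_agents, n_states, n_actions, n_actions))

# Tabular softmax policy parameters theta[i][s, a^i].
theta = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]

def policy(i, s):
    """pi_i(. | s): softmax over agent i's actions."""
    logits = theta[i][s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

# --- sample one trajectory under the joint policy prod_i pi_i ---
s = rng.integers(n_states)                       # s_0 ~ b_0 (uniform here)
traj = []
for t in range(T):
    a = [rng.choice(n_actions, p=policy(i, s)) for i in range(n_agents)]
    r = [R[i][s, a[0], a[1]] for i in range(n_agents)]
    traj.append((s, a, r))
    s = rng.choice(n_states, p=P[s, a[0], a[1]])

# --- per-agent estimate: sum_t gamma^t * grad log pi_i(a_t^i | s_t) * G_t^i ---
grads = [np.zeros_like(theta[i]) for i in range(n_agents)]
for i in range(n_agents):
    G = 0.0
    for t in reversed(range(T)):                 # returns G_t^i computed backwards
        s_t, a_t, r_t = traj[t]
        G = r_t[i] + gamma * G
        # grad of log softmax wrt the logits of row s_t: one-hot(a_t^i) - pi_i(.|s_t)
        g_log = -policy(i, s_t)
        g_log[a_t[i]] += 1.0
        grads[i][s_t] += (gamma ** t) * g_log * G

print(grads[0].shape, grads[1].shape)            # same shapes as theta_1, theta_2
```

In practice one would average such estimates over many trajectories and take a gradient ascent step on each $\theta_{i}$; the single-sample version above is only meant to mirror the expression term by term.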

As in the single-agent case, a baseline can be subtracted inside the policy gradient to reduce the variance of the estimate:

$$\nabla_{\theta_{i}} J_{i}(\theta_{i}) = \sum_{t = 0}^{\mathrm{T}} \gamma^{t}\, \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \mathcal{E}_{s_{1}} \mathcal{E}_{a_{1}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i}) \Big( q_{i}^{(t)}(s_{t},\ a_{t}) - b \Big) \Big]$$

Here the baseline $b$ must not depend on the joint action $a_{t} = \{ a_{t}^{1},\ a_{t}^{2},\ \cdots,\ a_{t}^{n} \}$; a common choice is the state-value function $v_{i}^{(t)}(s_{t})$.
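
With this choice, the bracketed term becomes agent $i$'s advantage; writing $A_{i}^{(t)}$ for it (the symbol is introduced here for illustration), the gradient reads:

$$\nabla_{\theta_{i}} J_{i}(\theta_{i}) = \sum_{t = 0}^{\mathrm{T}} \gamma^{t}\, \mathcal{E}_{s_{0}} \mathcal{E}_{a_{0}} \cdots \mathcal{E}_{s_{t}} \mathcal{E}_{a_{t}} \Big[ \nabla_{\theta_{i}} \ln \pi_{i}(a_{t}^{i} \mid s_{t};\ \theta_{i})\, A_{i}^{(t)}(s_{t},\ a_{t}) \Big], \qquad A_{i}^{(t)}(s_{t},\ a_{t}) = q_{i}^{(t)}(s_{t},\ a_{t}) - v_{i}^{(t)}(s_{t})$$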

