Stationary equilibrium
A stationary equilibrium is a Nash equilibrium in which players may condition their actions only on the current state of the game, not on past actions.
(Remark: If all states are payoff-relevant, a stationary equilibrium is often also called Markov perfect equilibrium.)
More formally, a stationary strategy for player \(i\) is a function \(\sigma_i: S \rightarrow \Delta(A_{si})\) mapping each state \(s\) to a probability distribution over the state-specific action set \(A_{si}\), so that \(\sigma_i(s,a_{si})=\mathbb{P}(a_{si}\mid s)\). A stationary equilibrium is a Nash equilibrium in stationary strategies.
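As a concrete illustration, a stationary strategy for one player can be stored as an array indexed by state and action, with each row a probability distribution. This is a minimal sketch using assumed array names, not sGameSolver's own data structures:

```python
import numpy as np

# Hypothetical representation (not sGameSolver's API): sigma_i[s, a] is the
# probability that player i plays action a in state s, i.e. P(a | s).
rng = np.random.default_rng(0)
num_states, num_actions = 3, 2

# Draw an arbitrary stationary strategy and normalize each row so that
# every state's action probabilities sum to one.
sigma_i = rng.random((num_states, num_actions))
sigma_i /= sigma_i.sum(axis=1, keepdims=True)

# Each row of sigma_i is a valid distribution over A_{si}.
assert np.allclose(sigma_i.sum(axis=1), 1.0)
print(sigma_i.shape)  # → (3, 2)
```

Because the strategy depends only on the current state \(s\) (and not on the history of play), a single such array per player fully describes behavior at every point of the game.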
Due to Bellman’s principle of optimality, stationary equilibria admit a recursive representation. A stationary strategy profile \(\boldsymbol{\sigma}=(\sigma_{sia})_{s\in S,i\in I, a\in A_{si}}\) together with state-player values \(\boldsymbol{V}=(V_{si})_{s\in S,i\in I}\) constitutes a stationary equilibrium if and only if
\[
V_{si} \;=\; \max_{\sigma'_{si}\,\in\,\Delta(A_{si})} \; \mathbb{E}_{a\sim(\sigma'_{si},\,\sigma_{s,-i})}\!\left[\, u_{si}(a) \;+\; \delta_i \sum_{s'\in S} \phi(s'\mid s,a)\, V_{s'i} \,\right],
\]
with \(\sigma_{si}\) attaining the maximum, for all states \(s\in S\) and players \(i\in I\). Here \(u_{si}(a)\) denotes player \(i\)'s instantaneous payoff in state \(s\) under action profile \(a\), \(\delta_i\) player \(i\)'s discount factor, and \(\phi(s'\mid s,a)\) the probability of transitioning to state \(s'\).
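One useful consequence of the recursive representation: once the strategy profile is held fixed, each player's values solve a linear Bellman equation. The following sketch (with assumed, illustrative payoffs and transitions, not sGameSolver's API) evaluates the values for one player as \(V = \bar{u} + \delta \bar{P} V\), where \(\bar{u}\) is the expected flow payoff and \(\bar{P}\) the state-transition matrix induced by the fixed profile:

```python
import numpy as np

# Illustrative numbers for a 3-state game under a FIXED strategy profile.
delta = 0.95                                      # discount factor
u_bar = np.array([1.0, 0.0, 2.0])                 # expected payoff by state
P_bar = np.array([[0.5, 0.5, 0.0],                # induced state transitions
                  [0.1, 0.8, 0.1],
                  [0.0, 0.3, 0.7]])

# Policy evaluation: solve the linear system (I - delta * P_bar) V = u_bar.
V = np.linalg.solve(np.eye(3) - delta * P_bar, u_bar)

# V is the unique fixed point of the Bellman operator for this profile.
assert np.allclose(V, u_bar + delta * P_bar @ V)
print(V.round(3))
```

The equilibrium problem is harder than this linear step precisely because the strategies themselves are unknown: each player's \(\sigma_{si}\) must also be a best response given everyone else's strategies and the resulting values.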
Finding a stationary equilibrium amounts to solving the above maximization problem (which is generally difficult) for equilibrium strategies \(\boldsymbol{\sigma}\) (and the corresponding values \(\boldsymbol{V}\)). The necessary and sufficient conditions can be expressed as a (potentially high-dimensional and nonlinear) system of equations. To solve it, sGameSolver relies on a solution method called homotopy continuation.
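The idea behind homotopy continuation can be shown on a one-dimensional toy problem. This is a generic sketch of the technique, not sGameSolver's implementation: to solve \(f(x)=0\), deform an easy start system \(g(x)=0\) into \(f\) via \(H(x,t)=(1-t)\,g(x)+t\,f(x)\) and track the known root of \(g\) from \(t=0\) to \(t=1\):

```python
import numpy as np

def f(x):
    return x**3 + x - 2.0      # target equation (root at x = 1)

def g(x):
    return x                   # trivial start system (root at x = 0)

def H(x, t):
    return (1.0 - t) * g(x) + t * f(x)

def dH_dx(x, t):
    return (1.0 - t) + t * (3.0 * x**2 + 1.0)

x = 0.0                                    # known root of g at t = 0
for t in np.linspace(0.0, 1.0, 101)[1:]:   # march t from 0 to 1
    for _ in range(10):                    # Newton corrector at each step
        x -= H(x, t) / dH_dx(x, t)

print(round(x, 6))  # → 1.0
```

At \(t=1\) the tracked point is a root of the target system. sGameSolver applies the same principle to the equilibrium conditions: the start system is a game whose stationary equilibrium is known, and the path leads to an equilibrium of the game of interest.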