
Context:

Consider $M$ finite state systems with evolution given by: $$ x^i_{k+1} = f(x_k^i,u_k) $$ where $x_k^i\in\{1,\dots,X\}$ is the state of system $i\in\{1,\dots,M\}$, $k\geq 0$, and $u_k\in\{1,\dots, U\}$ is some input which we can use at time $k$ to control the evolution of all systems (notice that $u_k$ is shared across the $M$ systems). In addition, $f:\{1,\dots,X\}\times\{1,\dots,U\}\to\{1,\dots,X\}$ is the same for all systems as well.

The goal is to choose the correct sequence $u_0,\dots,u_K$ to minimize $$ J = \sum_{i=1}^M J_i, \ \ \ \ J_i = \sum_{k=0}^K g(x^i_k, u_k) $$ where $K>0$ is some maximum time to be considered, and given some initial states $x_0^1,\dots,x_0^M$.

If $M=1$, I have observed that this is basically a deterministic finite-state optimal control problem, which can be solved using dynamic programming. The solution is to construct a transition graph of how $x_k$ evolves in time according to the decisions $u_k$, with edges weighted by $g(x_k,u_k)$. Then one can find the shortest path in this transition graph, starting from $x_0$. This can be done in time polynomial in $K$, $X$, and $U$. (I am following "Dynamic Programming and Optimal Control", Ch. 2, by D. Bertsekas for this part.)
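For concreteness, here is a minimal sketch of the $M=1$ dynamic program, assuming the dynamics $f$ and stage cost $g$ are given as Python functions; the name `solve_single` and the calling convention are illustrative, not from any library. It runs the standard backward recursion in $O(K\,X\,U)$ time:

```python
def solve_single(f, g, x0, K, X, U):
    """Backward dynamic programming over the horizon k = 0..K.

    f(x, u) -> next state, g(x, u) -> stage cost,
    states x in 1..X, inputs u in 1..U.
    Returns (optimal cost, optimal input sequence [u_0, ..., u_K]).
    """
    # Cost-to-go at the final stage k = K: only the stage cost remains.
    V = {x: min(g(x, u) for u in range(1, U + 1)) for x in range(1, X + 1)}
    policy = {K: {x: min(range(1, U + 1), key=lambda u: g(x, u))
                  for x in range(1, X + 1)}}
    # Backward recursion: V_k(x) = min_u [ g(x, u) + V_{k+1}(f(x, u)) ].
    for k in range(K - 1, -1, -1):
        V_next, V = V, {}
        policy[k] = {}
        for x in range(1, X + 1):
            best_u = min(range(1, U + 1),
                         key=lambda u: g(x, u) + V_next[f(x, u)])
            policy[k][x] = best_u
            V[x] = g(x, best_u) + V_next[f(x, best_u)]
    # Roll the optimal policy forward from x0 to recover the inputs.
    us, x = [], x0
    for k in range(K + 1):
        us.append(policy[k][x])
        x = f(x, us[-1])
    return V[x0], us
```

As a toy check, with $X=U=2$, dynamics $f(x,u)=u$ (the input drives the state) and cost $g(x,u)=x-1$ (being in state 2 costs 1), starting from $x_0=2$ with $K=2$, the optimal policy steers to state 1 immediately and the optimal cost is 1.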

Issue:

For general $M$, one can stack all state vectors into $x_k = [x_k^1,\dots,x_k^M]^T\in \{1,\dots,X\}^M$, which results in a deterministic finite state system as well. However, the number of states is now $X^M$. Thus, if we attempt to reuse the dynamic programming approach, the best we can achieve is time polynomial in $X^M$, which is not polynomial in $M$.
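To make the blowup explicit, here is a sketch of the same backward recursion on the stacked state, assuming $f$ and $g$ as above; the name `solve_stacked` is illustrative. The table has $X^M$ entries, so the running time is $O(K\, X^M\, U)$, exponential in $M$:

```python
from itertools import product

def solve_stacked(f, g, x0, K, X, U):
    """x0: tuple of M initial states. Returns the optimal total cost J.

    Same dynamic program as in the M = 1 case, but the state is the
    tuple (x^1, ..., x^M), so the value table has X**M entries.
    """
    M = len(x0)
    states = list(product(range(1, X + 1), repeat=M))
    # Cost-to-go at the final stage k = K.
    V = {s: min(sum(g(x, u) for x in s) for u in range(1, U + 1))
         for s in states}
    # Backward recursion over the shared input u.
    for _ in range(K):
        V = {s: min(sum(g(x, u) for x in s) + V[tuple(f(x, u) for x in s)]
                    for u in range(1, U + 1))
             for s in states}
    return V[tuple(x0)]
```

With the same toy dynamics as before ($f(x,u)=u$, $g(x,u)=x-1$) and two systems both starting in state 2, the optimal cost is 2, i.e. twice the single-system cost, since both copies are steered identically.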

Question:

Is there a way to solve this problem, resulting in a polynomial time in $M$?

I'm not sure this is possible. However, the fact that the systems do not interact with each other (other than sharing the input $u_k$) makes me think there is something smarter to do than stacking the states of all systems together.


1 Answer


It's NP-hard, so no, there is no polynomial-time algorithm (unless P=NP).

You can construct a reduction from 3SAT. A formula $\varphi$ with $M$ clauses over $K$ variables corresponds to $M$ finite state systems run for $K$ steps. Here $u_k$ is the truth value assigned to the $k$th variable, and you construct $f,g,x_0$ so that $\sum_k g(x^i_k,u_k)$ is zero if $u_1,\dots,u_K$ satisfy clause $i$ of $\varphi$, and positive otherwise. (This can be arranged as follows. Since $f$ is time-invariant, each state records the current time step together with a flag: the states are of the form $(i,k,\text{Sat})$ or $(i,k,\text{Unsat})$, with $x^i_0=(i,0,\text{Unsat})$. Design $f$ so that reading an input $u$ that satisfies a literal of clause $i$ on variable $k+1$ moves from $(i,k,\text{Unsat})$ to $(i,k+1,\text{Sat})$; otherwise only the time index is incremented, and once set, the $\text{Sat}$ flag is kept. Take $g=0$ everywhere except $g((i,K,\text{Unsat}),u)=1$. This uses $2M(K+1)$ states, polynomial in the size of $\varphi$.) Now there exists an input sequence such that $J=0$ iff $\varphi$ is satisfiable.
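The reduction can be checked mechanically on tiny instances. The sketch below (names `make_systems` and `min_cost` are illustrative) encodes each state as a triple (clause index, number of variables already read, satisfied flag); the time index is kept in the state precisely because $f$ must be time-invariant, and `min_cost` brute-forces all input sequences, which is of course exponential and only meant to sanity-check the construction:

```python
from itertools import product

def make_systems(clauses, K):
    """clauses: list of clauses, each a set of literals; literal v > 0 means
    "variable v is True", -v means "variable v is False" (variables 1..K).
    State = (i, j, sat): clause index i, j variables already read,
    sat = True once some literal of clause i has been satisfied."""
    def f(state, u):
        i, j, sat = state
        if j >= K:                        # all variables read; absorbing
            return state
        lit = (j + 1) if u else -(j + 1)  # literal made true by giving
                                          # variable j+1 the value u
        return (i, j + 1, sat or lit in clauses[i])

    def g(state, u):
        i, j, sat = state
        # Charge cost 1 only once all K variables have been read and
        # clause i was never satisfied.
        return 0 if sat or j < K else 1

    x0 = [(i, 0, False) for i in range(len(clauses))]
    return f, g, x0

def min_cost(clauses, K):
    """Brute-force over all 2**K input sequences (tiny instances only)."""
    f, g, x0 = make_systems(clauses, K)
    best = None
    for us in product([False, True], repeat=K):
        states, J = list(x0), 0
        for u in us:
            J += sum(g(x, u) for x in states)
            states = [f(x, u) for x in states]
        J += sum(g(x, us[-1]) for x in states)  # terminal stage cost
        best = J if best is None else min(best, J)
    return best

# (x1 or x2) and (not x1 or not x2) is satisfiable, so the minimum is 0:
print(min_cost([{1, 2}, {-1, -2}], 2))   # 0
# x1 and (not x1) is unsatisfiable, so the minimum is positive:
print(min_cost([{1}, {-1}], 1))          # 1
```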

  • Sounds reasonable. Thanks. – Commented Nov 18, 2021 at 8:31
