
Consider the following integer linear program (where $A$ is an integer matrix, $b$ an integer vector, and $c$ a positive integer vector): $$ \text{minimize}~~~ c\cdot x \\ \text{subject to}~~~ A\cdot x \geq b, ~~~ x\geq 0, ~~~ x~\text{is an integer vector.} $$ Denote its optimal value by $OPT(A,b,c)$. Computing $OPT(A,b,c)$ is NP-hard in general. We are interested in computing an upper bound on $OPT(A,b,c)$, that is, a number $M = MAX(A,b,c)$ such that $OPT(A,b,c) \leq M$, and the size of $M$ (in bits) is polynomial in the size of $A,b,c$.
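(To see why the bound is measured in bits rather than in magnitude: already with a single variable, taking $A = (1)$, $b = (2^n)$, $c = (1)$ gives $OPT(A,b,c) = 2^n$, which is exponential in magnitude but takes only $n+1$ bits to write, roughly the size of the input itself. So one can only hope for an $M$ with polynomially many bits, not one of polynomially bounded magnitude.)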

Is there an algorithm that, given $A,b,c$, computes such an upper bound $MAX(A,b,c)$ in time polynomial in the size of $A,b,c$?

  • What would you want such $M$ to be if the ILP is infeasible? Commented May 4, 2023 at 17:07
  • @SamuelBismuth, can't you consider the relaxation of the dual of your problem? Since the dual is a maximization problem (and the optima coincide), relaxing its constraints gives an upper bound on the optimum of the original minimization problem. Commented May 4, 2023 at 18:03
  • @Steven The maximum of the fractional dual problem (a maximization) equals the minimum of the fractional primal problem (a minimization), which is at most the minimum of the integral primal problem. Therefore it is a lower bound, not an upper bound. Commented May 12, 2023 at 8:53
  • @SilvioM Is it always possible to find an admissible integer vector in polynomial time? Commented Aug 30, 2023 at 2:32
  • @BernardoSubercaseaux Yes, this special case could be interesting. Commented Sep 28, 2023 at 10:13

2 Answers


Here is a partial answer, for the case when $b$ is strictly positive (that is, $\min_j b_j > 0$). As discussed afterwards, it also implicitly handles the case, mentioned in the comments, in which $A$ is non-negative.

Lemma 1. There is a poly-time algorithm that, given any instance $(A, b, c)$ with strictly positive $b$, either returns such an upper bound $\text{MAX}(A, b, c)$, or determines that the ILP is infeasible.

Proof. Here is the algorithm (a code sketch is given right after the list):

  1. let LP$(A, b, c)$ be the fractional relaxation, that is, the linear program $\min \{c\cdot x : Ax \ge b, x \ge 0\}$
  2. if LP$(A, b, c)$ is infeasible, return INFEASIBLE
  3. otherwise, let $x$ be an optimal basic feasible solution to LP$(A, b, c)$ (compute $x$ with any poly-time LP solver)
  4. let $\lambda = 1 + \max_j \sum_i |A_{ij}| / b_j$
  5. return $c\cdot \hat x$, where the integer vector $\hat x$ is defined by $\hat x_i = \lceil \lambda x_i \rceil$
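
To make the five steps concrete, here is a minimal Python sketch of the algorithm. The function name `upper_bound`, the use of SciPy's `linprog`, and the floating-point arithmetic are choices of this sketch, not part of the answer; a rigorous implementation would solve the relaxation in exact rational arithmetic so that the returned bound is certified.

```python
import numpy as np
from scipy.optimize import linprog

def upper_bound(A, b, c):
    """Sketch of the Lemma 1 algorithm; assumes every entry of b is > 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)

    # Steps 1-3: solve the fractional relaxation min { c.x : A x >= b, x >= 0 },
    # written as -A x <= -b to fit linprog's "<=" convention.
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * A.shape[1])
    if not res.success:
        return None  # LP infeasible, hence the ILP is infeasible

    # Step 4: lambda = 1 + max_j (sum_i |A_ij|) / b_j
    lam = 1 + np.max(np.abs(A).sum(axis=1) / b)

    # Step 5: scale the LP solution by lambda, round up, return the cost.
    x_hat = np.ceil(lam * res.x)
    return float(c @ x_hat)

# Toy instance: minimize x1 + x2  s.t.  2 x1 + x2 >= 3,  x1 + 3 x2 >= 4.
print(upper_bound([[2, 1], [1, 3]], [3, 4], [1, 1]))
```

On the toy instance the LP optimum is $(1,1)$ with value $2$ and the sketch returns $4$, a valid (if loose) upper bound on the integer optimum.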

By standard results the algorithm runs in polynomial time (even counting the bit-complexity of the arithmetic). It remains to show correctness.

In the case that LP$(A, b, c)$ is infeasible, there is no solution to the LP, so there is no solution to the original ILP, so the algorithm (which returns INFEASIBLE in this case) is correct.

In the remaining case, $x$ is a basic feasible solution (which by standard results has polynomial bit complexity). Observe that $\hat x$ is also feasible. Indeed, for any row $A_j$ of $A$, we have

$$ A_j \hat x = \sum_i A_{ij} \hat x_{i} \ge \sum_i (A_{ij} \lambda x_{i} - |A_{ij}|) \ge \lambda b_j - \sum_i |A_{ij}| \ge b_j, $$ where the first inequality holds because $\lambda x_i \le \hat x_i \le \lambda x_i + 1$ (so $A_{ij}\hat x_i \ge A_{ij}\lambda x_i - |A_{ij}|$ whatever the sign of $A_{ij}$), the second because $Ax \ge b$ and $\lambda > 0$, and the last follows from the choice of $\lambda$.

So $\hat x$ is a feasible integer solution, i.e., a solution to the original ILP, and hence $c\cdot \hat x$ is a valid upper bound on the ILP value. Finally, observe that the bit complexity of $c\cdot \hat x$ is polynomial in the bit complexity of $(A, b, c, x)$, which is polynomial in the bit complexity of $(A, b, c)$. $~~~~\Box$

Non-negative $A$

Lemma 1 implicitly handles all instances in which $A$ is non-negative, because in any such instance all constraints with $b_j \le 0$ can be deleted without affecting the optimal cost (as those constraints are trivially satisfied by every $x\ge 0$).
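
That preprocessing is a one-liner on top of the sketch above (again only an illustration, reusing the hypothetical `upper_bound` function and the NumPy import from that block):

```python
def upper_bound_nonneg_A(A, b, c):
    """Non-negative A: constraints with b_j <= 0 are satisfied by every x >= 0,
    so drop them and apply the Lemma 1 routine to the remaining rows."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    keep = b > 0                 # rows with strictly positive right-hand side
    if not keep.any():
        return 0.0               # x = 0 is feasible, and optimal since c > 0
    return upper_bound(A[keep], b[keep], c)
```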

The general case

The approach above (for the case when $b$ is strictly positive) is to find a feasible integer solution and return its cost. This approach will not extend easily to the general case, because in general finding a feasible integer solution to the ILP is NP-hard (even with the promise that one exists).

  • I think that for the general case, by standard results any basic feasible solution has polynomial bit complexity, and this extends to any integer solution that is a convex combination of basic feasible solutions. So, in the case that the polytope is bounded, every integer solution is a convex combination of basic feasible solutions, and so has polynomial bit complexity. The unresolved case is when the polytope is unbounded: in an unbounded polytope, if the polytope contains an integer solution, must there be one with polynomial bit complexity? Commented Oct 1, 2024 at 15:13

While the bound given by @SilvioM in the comments has polynomial size in $A,b,c$, as requested, one cannot expect a polynomial-time approximation for ILPs (see the next paragraph), and hence probably also not an upper bound that is polynomial in $OPT$. So this is probably as good as it gets.

The reason is that the well-known formulation of TSP as an ILP is a reduction from TSP to ILP. The general TSP does not admit a constant-factor approximation unless P=NP, even when all weights are integers (try to find a gap reduction from Hamiltonian Cycle as an exercise), and this reduction clearly preserves approximation, since any feasible solution of one problem is a feasible solution of the other with the same objective value. Hence ILP admits no constant-factor approximation unless P=NP.
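
(For completeness, one standard way to do that exercise, sketched here rather than spelled out in the original answer: given a graph $G$ on $n$ vertices and an integer factor $K$, form the complete graph on the same vertex set with weight $1$ on the edges of $G$ and weight $Kn$ on the non-edges. If $G$ has a Hamiltonian cycle, the optimal tour costs exactly $n$; otherwise every tour uses at least one non-edge and costs more than $Kn$. A $K$-approximation algorithm for TSP with integer weights would therefore decide Hamiltonian Cycle.)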

  • You mentioned a constant factor approximation, but the question is about a polynomial-size upper bound. Commented Sep 1, 2023 at 1:20
