$\begingroup$

Consider an $n$-player game in which each player exerts effort to increase their probability of winning. Let $x_i$ denote $i$'s probability of winning and $e_i$ denote $i$'s effort:

$$x_i=\frac{e_i}{\sum_{j\in N}e_j}$$

$i$'s payoff is

$$u_i=v_i\frac{e_i}{\sum_{j\in N}e_j}-e_i$$

Then there exists an effort vector $\mathbf{e} \in \mathbb{R}_+^n$ whose induced winning probabilities $x_i$ are the unique solution of the following convex optimization problem:

$$\text{maximize} \hspace{0.3cm} \sum_{i \in N}\hat{u}_i(x_i)$$ $$\text{subject to} \hspace{0.3cm} \sum_{i \in N}x_i\leq1$$ $$\text{and}\hspace{0.3cm}x_i\geq0$$

where $$\hat{u}_i(x_i)=v_ix_i\bigg(1-\frac{1}{2}x_i\bigg)$$

Where did this $\hat{u}_i$ term come from?


$u_i$ can be rewritten as $$v_ix_i-\frac{x_i}{\sum_{j\in N}e_j}$$

But I can't see how $\hat{u_i}$ was arrived at. Any ideas?


Later in the text the following definition appears:

$$\hat{u}_i(x_i)=(1-x_i)u_i(x_i)+x_i\bigg(\frac{1}{x_i}\int_0^{x_i} u_i(z)\,dz\bigg)$$

Maybe this is useful for understanding the expression $\hat{u}_i$? But I can't figure out how this relates to the utility maximization problem.

$\endgroup$
  • $\begingroup$ I think that $\sum_i x_i = 1$ is one of the conditions since the total probability of a victor must be 1. (In some games everyone can lose, so not strictly true) $\endgroup$ Commented Jun 11, 2018 at 20:40
  • $\begingroup$ @ErikKjellgren Thanks for your comment. This is the condition in the textbook (Contest Theory, Milan Vojnovic, pg. 174). $\endgroup$ Commented Jun 11, 2018 at 20:44
  • $\begingroup$ Can you derive the optimal $e_i$ for one player, given that the other $e_j$ are constant? $\endgroup$ Commented Jun 12, 2018 at 14:21
  • $\begingroup$ @LinAlg, I believe the maximization problem has a solution vector $\mathbf{e}$ that is all the equilibria. $\endgroup$ Commented Jun 12, 2018 at 15:10

1 Answer

$\begingroup$

What matters is to express the Nash equilibrium conditions in terms of the winning probabilities $x_i$ and then find a concave “potential” whose KKT conditions reproduce those equilibrium conditions. In your contest, $x_i=e_i/E$ with $E=\sum_j e_j$. Player $i$’s payoff is $u_i(e)=v_i x_i-e_i$. Holding rivals’ efforts fixed, the marginal effect of $e_i$ on $x_i$ is $$ \frac{\partial x_i}{\partial e_i}=\frac{E-e_i}{E^2}=\frac{1-x_i}{E}. $$ Hence the first-order condition for any player with $e_i>0$ is $$ \frac{\partial u_i}{\partial e_i}=v_i\,\frac{\partial x_i}{\partial e_i}-1 =\frac{v_i(1-x_i)}{E}-1=0, $$ i.e. $$ v_i(1-x_i)=E \quad\text{if } e_i>0,\qquad v_i(1-x_i)\le E \quad\text{if } e_i=0. $$ Together with the feasibility $\sum_i x_i\le 1$ (with equality when $E>0$), these inequalities exactly characterize the Nash equilibrium in $x$-space.
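These equilibrium conditions are easy to verify numerically. The sketch below (not from the book; the prize values $v=(4,3,2)$ and the damping factor are made up for illustration) computes the contest equilibrium by damped best-response iteration and checks that $v_i(1-x_i)=E$ holds for every active player:

```python
import math

def best_response(v_i, s):
    # Best response to rivals' total effort s solves the FOC
    # v_i * s / (e_i + s)^2 = 1, i.e. e_i = sqrt(v_i * s) - s, truncated at 0.
    return max(math.sqrt(v_i * s) - s, 0.0)

v = [4.0, 3.0, 2.0]          # hypothetical prize values
e = [0.1, 0.1, 0.1]          # arbitrary starting efforts
for _ in range(2000):        # damped Gauss-Seidel best-response sweeps
    for i in range(len(v)):
        s = sum(e) - e[i]
        e[i] = 0.5 * e[i] + 0.5 * best_response(v[i], s)

E = sum(e)
x = [ei / E for ei in e]
for vi, xi in zip(v, x):
    # At equilibrium, v_i * (1 - x_i) should equal the aggregate effort E
    print(f"v_i(1 - x_i) = {vi * (1 - xi):.6f},  E = {E:.6f}")
```

With all players active, summing $x_i = 1 - E/v_i$ over $i$ gives the closed form $E = 2/\sum_i v_i^{-1}$, which for these values is $24/13 \approx 1.846$.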

Now take the convex program $$ \max_{x\ge 0,\;\sum x_i\le 1}\; \sum_i \hat u_i(x_i), \qquad\text{with }\;\hat u_i'(x_i)=v_i(1-x_i). $$ If $\lambda$ is the multiplier on $\sum_i x_i\le 1$ and $\mu_i$ the multipliers on $x_i\ge 0$, the KKT conditions for an interior $x_i>0$ are $$ \hat u_i'(x_i)-\lambda=0 \;\Longleftrightarrow\; v_i(1-x_i)=\lambda, $$ and for a boundary $x_i=0$: $v_i(1-x_i)\le \lambda$. Comparing with the Nash conditions above shows that the Lagrange multiplier $\lambda$ plays exactly the role of the aggregate effort $E$, so any solution of the program coincides with the contest equilibrium. Because $\hat u_i''(x_i)=-v_i<0$, the objective is strictly concave and the solution is unique.
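Because the KKT conditions pin down $x_i=\max(0,\,1-\lambda/v_i)$ with $\lambda$ chosen so that $\sum_i x_i=1$, the convex program reduces to a one-dimensional bisection on $\lambda$. A minimal sketch, again with made-up values $v=(4,3,2)$:

```python
def solve_program(v, iters=200):
    # KKT: hat_u_i'(x_i) = v_i (1 - x_i) = lam whenever x_i > 0, so
    # x_i = max(0, 1 - lam / v_i); bisect on lam until sum x_i = 1.
    lo, hi = 0.0, max(v)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        total = sum(max(0.0, 1.0 - lam / vi) for vi in v)
        if total > 1.0:
            lo = lam      # probabilities sum to too much: raise the multiplier
        else:
            hi = lam
    x = [max(0.0, 1.0 - lam / vi) for vi in v]
    return lam, x

lam, x = solve_program([4.0, 3.0, 2.0])
print(lam, x)             # lam plays the role of the aggregate effort E
```

For these values the interior solution is $\lambda = 24/13$ and $x = (7/13,\,5/13,\,1/13)$, matching the equilibrium conditions above.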

Integrating $\hat u_i'(x_i)=v_i(1-x_i)$ gives $$ \hat u_i(x_i)=\int_0^{x_i} v_i(1-z)\,dz=v_i\Big(x_i-\tfrac12 x_i^2\Big) =v_i x_i\Big(1-\tfrac12 x_i\Big), $$ which is the formula you quoted (up to an irrelevant constant). This is where the $\hat u_i$ comes from: it is a potential whose gradient reproduces each player’s marginal payoff once the effort externality is represented by the common constraint $\sum x_i\le 1$.

Your alternative expression $$ \hat u_i(x_i)=(1-x_i)u_i(x_i)+x_i\Big(\frac{1}{x_i}\int_0^{x_i} u_i(z)\,dz\Big) =(1-x_i)u_i(x_i)+\int_0^{x_i} u_i(z)\,dz $$ is just a convenient antiderivative identity. Differentiating yields $$ \frac{d}{dx_i}\hat u_i(x_i)=(1-x_i)\,u_i'(x_i), $$ so whenever the “gross” utility in $x$-space is $u_i(x_i)=v_i x_i$ (the prize term), you get $\hat u_i'(x_i)=(1-x_i)v_i$ and hence the same $\hat u_i=v_i(x_i-\tfrac12 x_i^2)$. Note that when you rewrote $u_i$ as $v_i x_i-\frac{x_i}{\sum_j e_j}$ you inverted the cost term: since $e_i=x_i\sum_j e_j$, the cost is $-e_i=-x_i\sum_j e_j$, not $-x_i/\sum_j e_j$. The cost is absorbed in the optimization via the shared constraint and its multiplier $\lambda=E$, which is precisely why maximizing $\sum_i \hat u_i(x_i)$ delivers the equilibrium $x$-vector.
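The identity can also be checked numerically for the linear prize term $u_i(x)=vx$: the integral form and the closed form $vx(1-\tfrac12 x)$ agree. A throwaway check with an arbitrary $v$ (the midpoint rule is exact here because the integrand is linear):

```python
v = 3.0                       # arbitrary prize value for the check

def u(x):
    return v * x              # "gross" utility in x-space: the prize term

def u_hat_via_identity(x, n=1000):
    # (1 - x) u(x) + integral_0^x u(z) dz, midpoint rule for the integral
    h = x / n
    integral = sum(u((k + 0.5) * h) for k in range(n)) * h
    return (1 - x) * u(x) + integral

def u_hat_closed(x):
    return v * x * (1 - 0.5 * x)

print(u_hat_via_identity(0.4), u_hat_closed(0.4))
```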

$\endgroup$
