If someone had told me months ago that work on Jacobsthal's function provided bounds for a generalized version of this problem, I might have walked away from the problem and done nothing more. As it is, I've taught myself quite a bit, and will share some of it.
Aaron Meyerowitz was kind enough not only to do some computations for me but also to show me a better way to compute some quantities I was interested in to solve the problem. Inspired by his efforts, I convinced myself that a sequence of error terms I was using satisfied $E_{i+1}(x) \le 2E_i(x)$, which then led me to improve Westzynthius's result to $Q \cdot 2^{g(n)}$, where $g(n)$ was $n/2 + O(\log(n))$.
Aaron also pointed me towards a sequence in the On-Line Encyclopedia of Integer Sequences which led me to Hagedorn's 2009 paper on computing certain values of Jacobsthal's function.
http://www.tcnj.edu/~hagedorn/papers/JacobPaper.pdf

As a result, I have an answer to two of my questions and an improvement on a result in the literature. For question 1, the answer is yes: there is other work giving provable upper bounds on the order of $2^{\text{polylog}(n)}$. For question 2, the answer is also yes: a 1977 result of Harlan Stevens gives an explicit bound in which an important step uses the equivalent of the Bonferroni inequalities. I looked over his argument and decided to tweak it to give an asymptotically better result. (Other work, especially by Iwaniec, gives even better asymptotic upper bounds, but does not give explicit constants, so it is hard to tell precisely when those bounds are better.) I give most of Stevens's argument and some related material below.
$m$ will be a positive integer, and I prefer $m \gt 1$. Jacobsthal's function $j(m)$ gives the smallest positive integer $j$ such that, for any interval $I$ of $j$ consecutive integers of the form $[a+1,\ldots,a+j]$, it is guaranteed that one of the integers in $I$ is coprime to $m$, i.e. $\gcd(m,a+i)=1$ for at least one $a+i \in I$. Let $\text{rad}(m) = \prod_{p \text{ prime},\, p \mid m} p$ be the largest squarefree factor of $m$; then $j(\text{rad}(m))=j(m)$, and $j(m)$ is also the size of the largest gap between consecutive members of the set of integers which are relatively prime to $m$. In my problem above, I wanted nice upper bounds on $j(P_n)$, that is, on Jacobsthal's function evaluated at the $n$th primorial $P_n$.
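Since the definition is easy to compute with directly for small $m$, here is a minimal brute-force sketch (my own code and helper names, not taken from any of the papers mentioned) that computes $j(m)$ from the gap description and checks $j(\text{rad}(m)) = j(m)$ on the first few primorials.

```python
# Brute-force j(m) from the gap description above (helper names are mine).
from math import gcd, prod

def jacobsthal(m):
    """Largest gap between consecutive integers coprime to m, for m > 1."""
    coprime = [k for k in range(1, 2 * m + 1) if gcd(k, m) == 1]
    return max(b - a for a, b in zip(coprime, coprime[1:]))

primes = [2, 3, 5, 7, 11, 13]              # first few primes
for n in range(1, len(primes) + 1):
    P_n = prod(primes[:n])                 # the n-th primorial
    print(n, P_n, jacobsthal(P_n))         # gives 2, 4, 6, 10, 14, 22
    assert jacobsthal(4 * P_n) == jacobsthal(P_n)   # j depends only on rad(m)
```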
There are lots of accessible results on $j(m)$. For example, if $m_1,\ldots,m_n$ are all the distinct prime factors of $m$ (and so of $\text{rad}(m)$), and each of them is greater than $n$, then the Chinese Remainder Theorem is used in showing $j(m) \ge n+1$, and in fact for such $m$, $j(m) = n+1$. This can be generalized with a little bit of help: say $m$ and $r \gt 1$ are such that $\sum_{1 \le i \le n} 1/m_i \lt (1 - 1/r)$. Then $j(m) \le rn$. Since it will introduce notation to be used later, I sketch a proof.
Proof sketch: For an interval $I$ of $l$ consecutive integers and a positive integer $t$, let $J_t$ be the number of (integer) multiples of $t$ inside $I$. Then $\lceil l/t \rceil \ge J_t \ge \lfloor l/t \rfloor$. So the count of numbers in $I$ not coprime to $m$ is at most $n + \sum_{1 \le i \le n} l/m_i \lt n + l(1-1/r)$. Now if $l \ge rn$, then the right hand side of this inequality is at most $l$, so the count is strictly less than $l$, meaning at least one number in $I$ is coprime to $m$. End of proof sketch.
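As a sanity check of the sketch (my own test code again, with the same brute-force helper), one can verify $j(m) \le rn$ numerically on a few sets of prime factors satisfying the hypothesis.

```python
# Check the claim above: if the distinct prime factors m_1,...,m_n of m
# satisfy sum 1/m_i < 1 - 1/r, then j(m) <= r*n.  (My own test cases.)
from math import gcd, prod

def jacobsthal(m):
    coprime = [k for k in range(1, 2 * m + 1) if gcd(k, m) == 1]
    return max(b - a for a, b in zip(coprime, coprime[1:]))

tests = [([5, 7, 11], 2), ([3, 5, 7, 11], 5), ([7, 11, 13, 17, 19], 2)]
for factors, r in tests:
    n, m = len(factors), prod(factors)
    assert sum(1 / p for p in factors) < 1 - 1 / r    # hypothesis
    assert jacobsthal(m) <= r * n                     # conclusion
    print(factors, "r =", r, ":", jacobsthal(m), "<=", r * n)
```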
One can do better, since for small primes $m_1$ and $m_2$ there may be a multiple of $m_1m_2$ inside the interval. Kanold, Jacobsthal, Erdos and others have made improvements along these lines. However, I now turn to the result of Harlan Stevens.
Theorem (Stevens) $j(m) < 2n^{(2 + 2e\log(n))}$.
Improvement? (Paseman): For sufficiently large $n$, $j(m) \le O^*(n^{(2 +2e \log(\log(p_n)))})$.
The reason for the $O^*$ will become apparent, as I will discuss Stevens's proof and the improvement together. In discussing the improvement, I will ask that $n \gt 30$.
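Purely as an illustration (the $O^*$ hides a constant which I ignore here, and sympy's prime is used to get $p_n$), here is how the two exponents compare for a few values of $n$.

```python
# Compare the exponent in Stevens's bound, 2 + 2e*log(n), with the exponent
# 2 + 2e*log(log(p_n)) in the improved form; constants hidden by O* ignored.
from math import e, log
from sympy import prime      # prime(n) = the n-th prime

for n in (100, 1000, 10000):
    p_n = prime(n)
    print(f"n={n:>6}  p_n={p_n:>7}  Stevens exponent = {2 + 2*e*log(n):6.2f}"
          f"   improved exponent = {2 + 2*e*log(log(p_n)):6.2f}")
```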
Stevens starts out using (equivalents of) $I$, $l$, and $J_t$ as above. He uses inclusion-exclusion to compute $L$, the number of integers coprime to $m$ in the interval $I$. For ease, I assume $m$ is squarefree; here $\mu(t)$ is the Moebius function, so $\mu(t)$ is $(-1)$ raised to the power of the number of distinct prime divisors of (squarefree) $t$. Thus,
$L = \sum_{t \mid m} \mu(t)J_t$.
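This identity is easy to test directly. Here is a short verification of mine (using sympy's divisors and primefactors, and using $\mu(t) = (-1)$ to the number of prime factors of $t$, since $m$ is squarefree).

```python
# Verify L = sum_{t | m} mu(t) * J_t on a few arbitrary intervals,
# for a squarefree m.  (My own check, not from Stevens's paper.)
from math import gcd
from sympy import divisors, primefactors

m = 2 * 3 * 5 * 7 * 11                     # a squarefree modulus
for a, l in [(0, 30), (97, 50), (1234, 17)]:
    interval = range(a + 1, a + l + 1)
    L = sum(1 for x in interval if gcd(x, m) == 1)            # coprime count
    rhs = sum((-1) ** len(primefactors(t))                    # mu(t) ...
              * sum(1 for x in interval if x % t == 0)        # ... times J_t
              for t in divisors(m))
    assert L == rhs
    print(a, l, L)
```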
Now Stevens cites Landau to use what I think of as a Bonferroni inequality. Breaking the sum up by $\nu(t)$, the number of distinct prime factors of $t$, one has for odd values of $s$ the following:
$L \ge \sum_{0 \le i \le s} (-1)^i ( \sum_{t \mid m , \nu(t) = i} J_t) $.
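Here is a small numerical check (mine, not Stevens's or Landau's) that truncating at an odd $s$ really does give a lower bound.

```python
# Check the Bonferroni-type inequality: for odd s, truncating the Moebius sum
# at nu(t) <= s gives a lower bound for L.  (My own check.)
from math import gcd
from sympy import divisors, primefactors

m, a, l = 2 * 3 * 5 * 7 * 11 * 13, 1000, 60
interval = range(a + 1, a + l + 1)
L = sum(1 for x in interval if gcd(x, m) == 1)
for s in (1, 3, 5):
    truncated = sum((-1) ** len(primefactors(t))
                    * sum(1 for x in interval if x % t == 0)
                    for t in divisors(m) if len(primefactors(t)) <= s)
    assert truncated <= L
    print("s =", s, ":", truncated, "<=", L)
```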
Using the estimate $|J_t - l/t| \le 1$ (except for $t=1$, when $J_t = l$),
$L \ge lT_s - SB$, where $T_s = \sum_{0 \le i \le s} (-1)^i (\sum_{t \mid m , \nu(t) = i} 1/t)$ and $SB = \sum_{1 \le i \le s} \binom{n}{i}$.
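Before moving on, here is a check of mine (small squarefree $m$, $s = 3$) that $L \ge lT_s - SB$ holds on a few intervals, and that any $l$ exceeding $SB/T_s$ really does force a coprime element.

```python
# Check L >= l*T_s - SB, and that l > SB/T_s forces L > 0, for a small
# squarefree m and odd s = 3.  (My own check.)
from math import gcd, comb
from sympy import divisors, primefactors

m, s = 2 * 3 * 5 * 7 * 11, 3
n = len(primefactors(m))
T_s = sum((-1) ** len(primefactors(t)) / t
          for t in divisors(m) if len(primefactors(t)) <= s)
SB = sum(comb(n, i) for i in range(1, s + 1))

def coprime_count(a, l):
    return sum(1 for x in range(a + 1, a + l + 1) if gcd(x, m) == 1)

for a, l in [(0, 200), (500, 350), (12345, 1000)]:
    assert coprime_count(a, l) >= l * T_s - SB
l_bound = int(SB / T_s) + 1                    # any l > SB/T_s will do
assert all(coprime_count(a, l_bound) > 0 for a in range(0, 3 * m, 7))
print("T_s ≈", round(T_s, 4), " SB =", SB, " SB/T_s ≈", round(SB / T_s, 1))
```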
Now normally one finds $s$ so that $T_s$ and $SB$ can be well estimated, and then one says that for any $l > SB/T_s$, $L$ will be positive, giving that $SB/T_s$ (or the appropriate estimate) is an upper bound for $j(m)$. Stevens doesn't do that. He instead rewrites $T_s$ as $P - T$, finds estimates for $T',P',$ and $SB'$ so that $ SB'/(P' - T') \ge SB/T_s$, ensures that $P' > T'$, and then concludes that $j(m) \le SB'/(P' - T')$. I will do the same, except I will use more refined estimates.
Let's do $SB$. Stevens's replacement is $n^s$; mine is $\binom{n+1}{s}$ times a small fudge factor, which will be strictly less than $n^s$ for $s \gt 2$ and $n$ sufficiently larger than $2s$. The small fudge factor is from dominating the sum by a geometric series with common ratio $\binom{n+1}{s-2} / \binom{n+1}{s}$. This gives $1 + (s(s-1))/((n+2)(n+3-2s))$.
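Here is a quick numerical check (mine) of this replacement for $SB$ on a few pairs $(n, s)$ with $n$ comfortably larger than $2s$.

```python
# Check that sum_{1<=i<=s} C(n,i) <= C(n+1,s) * (1 + s(s-1)/((n+2)(n+3-2s)))
# and that this product stays below n^s, for odd s > 2 and n well above 2s.
from math import comb

for n in (31, 50, 100, 500):
    for s in (3, 5, 7, 9):
        if n <= 2 * s + 2:
            continue                      # stay well inside the claimed range
        SB = sum(comb(n, i) for i in range(1, s + 1))
        fudge = 1 + s * (s - 1) / ((n + 2) * (n + 3 - 2 * s))
        bound = comb(n + 1, s) * fudge
        assert SB <= bound < n ** s
        print(n, s, SB, round(bound))
```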
Now Stevens defines $P = \sum_{0 \le i \le n} (-1)^i \sum_{t \mid m , \nu(t) = i} 1/t$. As in the question above, this gives $P = \prod_{1 \le i \le n}(1 - 1/m_i)$. (Recall that the distinct prime factors of $m$ are the $m_i$.) Then $T = P - T_s$, so
$T = \sum_{s \lt i \le n} (-1)^i \sum_{t \mid m , \nu(t) = i} 1/t$ .
Now Stevens estimates $T$ by a quantity $T' \gt T$, first by bounding the inner sum (and dropping the $(-1)^i$), and then by letting the index of the outer sum run past $n$. He then uses Taylor's theorem with remainder on $e^{h(n)}$ for a certain choice of $h(n)$ to come up with a compact term that bounds the infinite sum. Then $s$ is restricted to arrive at a $T'$ which bounds $T$ but is still small.
Here we go. For $n$ sufficiently large
$ (i!)\sum_{t \mid m , \nu(t) = i} 1/t \le (\sum_{1 \le j \le n} 1/m_j)^i \le h(n)^i$, so $T \lt \sum_{s \lt i \le n} ((\sum_{1 \le j \le n} 1/m_j)^i)/(i!)$,
so $T \lt \sum_{s \lt i } (h(n)^i)/(i!) \le e^{h(n)}(h(n)^{s+1})/((s+1)!)$.
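A quick check of the tail estimate (my own, for a few values of $h$ in the range that matters here and several odd $s$):

```python
# Check that the tail sum_{i > s} h^i / i! is at most e^h * h^(s+1) / (s+1)!.
from math import exp, factorial

for h in (0.9, 1.5, 2.3, 3.0):
    for s in (3, 5, 7, 9, 11):
        tail = sum(h ** i / factorial(i) for i in range(s + 1, 200))
        bound = exp(h) * h ** (s + 1) / factorial(s + 1)
        assert tail <= bound
        print(f"h = {h}, s = {s:2d}: tail = {tail:.3e} <= {bound:.3e}")
```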
Stevens chose $h(n) = \log(n)$, using $\log(n) \gt \sum_{1 \le j \le n} 1/p_j \ge \sum_{1 \le j \le n} 1/m_j$ for $n \gt 2$. I choose $\alpha_n + \log(\log(p_n))$ for $h(n)$, and will talk about the positive constant $\alpha_n \lt 1$ and the allowed values of $n$ later. (Thinking of $\alpha_n = 0.75$ is good.) Now let's get to $T'$.
If we replace $(s+1)!$ with the smaller $((s+1)/e)^{s+1}$ and choose $s$ so that $s+1 \gt 2eh(n)$, then $T' = e^{h(n)}2^{-s-1} \gt e^{h(n)}(h(n)^{s+1})/((s+1)!) \gt T$.
Also $P = \prod_{1 \le i \le n}(1 - 1/m_i) \ge \prod_{1 \le i \le n}(1 - 1/p_i) \gt 1/(\beta_n\log(p_n))$, so let $P' = 1/(\beta_n\log(p_n))$. Again the valid range of $n$ depends on the choice of $\beta_n$; for now choose it (say, $\beta_n = 3$) so that $P \gt P'$. (Stevens chose $1/n$ for $P'$.) Then $P - T \gt P' - T'$, and $SB' \gt SB$, where $SB'$ is the replacement for $SB$ found above. We just need to find odd $s \gt 2eh(n) - 1$ and ensure that $P' \gt T'$, and then combine everything. Stevens's form is similar to what is below; I will use my versions of $P'$ and $T'$. Writing $z$ for $\log(p_n)$,
$P' - T' = 1/(\beta_n z) - e^{\alpha_n}z(1/2)^{2e(\alpha_n + \log(z))}$.
Now $(1/2)^{2e\log(z)} = z^{-2e\log(2)}$. So
$P' -T' = (\beta_n z)^{-1} - (e/2^{2e})^{\alpha_n} z^{-2e\log(2) + 1} = (\beta_n z)^{-1} - ((e^{\alpha_n})z)^{-2e\log(2) + 1}$
$ = z^{-1} ( (\beta_n)^{-1} - (e^{\alpha_n})^{-2e\log(2) + 1} z^{-2e\log(2) + 2} ) $
Since we will have $n$ large enough so that $\beta_n \lt (e^{\alpha_n})^{2e\log(2) - 1}$, and since $z^{2e\log(2) - 2}\gt 1$, we get that $P' - T' > 0$ .
Now to put it all together. If $s$ is odd and $s+1 \gt 2e(\alpha_n + \log(\log(p_n)))$ then
$\binom{n+1}{s} ( 1 + (s(s-1))/((n+2)(n+3-2s)) ) (z/[(\beta_n)^{-1} - (e^{\alpha_n})^{-2e\log(2) + 1} z^{-2e\log(2) + 2}]) \gt SB/(P-T)$, and this gives a value for $l$ which in turn gives $L \gt 0$. Since for the improvement I assume $n \gt 30$, the fudge factor is at most 18/11, and when $\beta_n = 3$ and $\alpha_n = 0.75$, the denominator of the last fraction increases from some value above 1/5 toward 1/3 as $n$ increases. So the whole expression is bounded by $9 \binom{n+1}{s} z$, or $9 \binom{n+1}{s} \log(p_n)$. Since $\log(\log(p_n))$ is increasing, the whole ball of wax is bounded by $n^{2e(\log\log(p_n) + \alpha_n) +1}\log(p_n)$.
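To see the assembled bound in action, here is an illustrative evaluation of mine (with $\alpha_n = 0.75$, $\beta_n = 3$, and the smallest admissible odd $s$); it also checks numerically that the expression stays below $9 \binom{n+1}{s} z$ and below $n^{2e(\log\log(p_n) + \alpha_n) +1}\log(p_n)$ for the values of $n$ tried.

```python
# Evaluate the displayed expression for a few n > 30, using alpha_n = 0.75,
# beta_n = 3, z = log(p_n), h(n) = alpha_n + log(z), and the smallest odd s
# with s+1 > 2e*h(n); then compare against the two stated upper bounds.
from math import comb, e, log
from sympy import prime

alpha, beta = 0.75, 3.0
for n in (31, 50, 100, 500):
    z = log(prime(n))
    h = alpha + log(z)
    s = 3
    while s + 1 <= 2 * e * h:             # smallest odd s with s+1 > 2e*h(n)
        s += 2
    fudge = 1 + s * (s - 1) / ((n + 2) * (n + 3 - 2 * s))
    denom = 1 / beta - (e ** alpha) ** (1 - 2 * e * log(2)) * z ** (2 - 2 * e * log(2))
    expression = comb(n + 1, s) * fudge * z / denom
    assert 0 < expression <= 9 * comb(n + 1, s) * z
    assert expression <= n ** (2 * e * (log(z) + alpha) + 1) * z
    print(f"n = {n:3d}, s = {s:2d}: bound ≈ {expression:.3e}")
```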
Now as to the choice of $\alpha_n$ and $\beta_n$. They were chosen generously: Mertens showed that $\sum_{1 \le i \le n} 1/p_i \lt \log\log(p_n) + B + \delta$, where $B$ is a constant close to 0.26 and $\delta$ is an error bounded in size by a sum of two terms, the largest of which is $4/\log(p_n +1)$. So to use Mertens's estimate, $p_n + 1$ should be bigger than (some number not much larger than) $e^8$. Similarly, Mertens has an estimate on $\prod_{1 \le i \le n} (1 - 1/p_i)$, namely $e^{-(\gamma + \delta)}/\log(p_n)$, where $e^{-\gamma}$ is close to 1/1.78 and $\delta$ is bounded by a sum of three terms, the largest of which is again $4/\log(p_n+1)$, again requiring that $p_n + 1$ be bigger than (something close to) $e^8$. However, computations suggest that the constants chosen allow the required estimates to hold for $n \gt 30$, and even for some $n$ smaller than 30. The major constraint is $n \gt 2s \gt 2eh(n) - 1$.
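For what it's worth, here is the sort of computation I mean (my own script, numerical evidence for moderate $n$ rather than a proof for all $n$): it checks, for $30 \lt n \le 2000$, that with $\alpha_n = 0.75$ and $\beta_n = 3$ the two needed inequalities hold.

```python
# Check, for 30 < n <= 2000, that sum_{j<=n} 1/p_j < 0.75 + log(log(p_n))
# (so h(n) really dominates the prime sum) and that
# prod_{j<=n} (1 - 1/p_j) > 1/(3*log(p_n)), i.e. P > P' with beta_n = 3.
from math import log
from sympy import primerange

alpha, beta = 0.75, 3.0
primes = list(primerange(2, 20000))[:2000]    # the first 2000 primes
running_sum, running_prod = 0.0, 1.0
for n, p in enumerate(primes, start=1):
    running_sum += 1.0 / p
    running_prod *= 1.0 - 1.0 / p
    if n > 30:
        assert running_sum < alpha + log(log(p))
        assert running_prod > 1.0 / (beta * log(p))
print("both inequalities held for 30 < n <= 2000")
```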
I welcome any constructive input and error checking. I will revise this in the coming weeks.
Gerhard "Almost Ready To Shelve This" Paseman, 2011.02.25