A power calculation is really just another way of showing that a study can answer the question it was set up to answer with a desirable degree of reliability. In that sense, if you go in with a specific question, it always makes sense to assess this.
That said, I don't think you need a "power analysis" as much as a sample size estimation -- but the two are closely linked. The way I read the plan is that you want to distinguish between $H_0:\pi\le0.5$ and $H_A:\pi\ge0.7$, with $\pi$ the probability of completing treatment. You'll need to set acceptable type I and type II error rates (respectively: deciding in favour of $H_A$ when in truth $H_0$ holds, and vice versa), which I'll fix at $\alpha=0.05$ (one-sided) and $\beta=0.20$.
The question then becomes: what is the probability of observing a given number of completers under each hypothesis? This follows from the binomial distribution, which I'll evaluate in R:
```r
## Range of sample sizes under consideration
ns <- 10:100

## 1 - alpha probability of observing this many completers or less under H0
y0 <- qbinom(.95, ns, .50)

## 1 - beta probability of observing this many completers or more under HA
ya <- qbinom(.20, ns, .70)
```
Plotting both of these bounds: 
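A minimal base-R sketch of how such a plot could be drawn (the colour coding matches the description below: red for the $H_0$ bound, green for the $H_A$ bound):

```r
## Sketch of the plot: red = upper bound under H0, green = lower bound under HA
plot(ns, y0, type = "l", col = "red", lwd = 2,
     xlab = "Sample size (n)", ylab = "Number of completers")
lines(ns, ya, col = "green", lwd = 2)
legend("topleft", lty = 1, lwd = 2, col = c("red", "green"),
       legend = c("qbinom(.95, n, .50) under H0", "qbinom(.20, n, .70) under HA"))
```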
At these operating characteristics you don't want to go in with, say, $n<20$, because there is a range of observations that is compatible with both hypotheses (in the plot: the red line lies above the green line). The size to aim for is the smallest one at which a single cut-point separates the two: under $H_0$ the number of completers stays at or below it with probability at least $1-\alpha$, while under $H_A$ it exceeds it with probability at least $1-\beta$:
```r
## Where does the red line go below the green line?
i <- which.max(y0 < ya)

## Sample size
ns[i]
#> 37

## Threshold for accepting H0
y0[i]
#> 23

## Threshold for accepting HA
ya[i]
#> 24
```
An efficient design (for these hypotheses, $\alpha$, and $\beta$!) is to use $n=37$, and to accept $H_0$ when you observe $x\le23$ completers, or $H_A$ when you observe $x\ge24$ instead.
As a sanity check this is easily confirmed empirically:
```r
set.seed(1)

## Type I error: P(accepting HA | H0 true), should be close to alpha = 0.05
mean(rbinom(1E6, 37, .5) > 23)
#> 0.049129

## Type II error: P(accepting H0 | HA true), should be close to beta = 0.20
mean(rbinom(1E6, 37, .7) < 24)
#> 0.192474
```
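For completeness, the same error rates can also be read off exactly from the binomial CDF (a small extra check I'm adding here; both values should agree with the simulated ones up to Monte Carlo error):

```r
## Exact type I error: P(X >= 24) when X ~ Binomial(37, 0.5)
1 - pbinom(23, 37, .5)

## Exact type II error: P(X <= 23) when X ~ Binomial(37, 0.7)
pbinom(23, 37, .7)
```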
The study you referenced mentions a "Simon's two-stage design": this adds an interim look with the option to stop early in favour of $H_0$ (futility). Details are in his paper (Simon, 1989).
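The key ingredient (his Equation 1, which the `bprob` function further below computes -- this is my reading of the formula) is the probability of accepting $H_0$ for a design $(r_1, n_1, r, n)$ under a true completion probability $\pi$:

$$P(\text{accept } H_0 \mid \pi) \;=\; B(r_1;\, \pi, n_1) \;+\; \sum_{x = r_1 + 1}^{\min(n_1,\, r)} b(x;\, \pi, n_1)\, B(r - x;\, \pi, n - n_1),$$

where $b$ and $B$ denote the binomial probability mass and cumulative distribution functions (dbinom and pbinom in the code).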
I'll say first that to specify these designs you should really use a more optimized routine such as clinfun::ph2simon or the calculator provided in the comments: deriving a design requires numerical evaluation of a sizeable range of binomial (cumulative) densities to identify the combinations of stage-wise sample sizes and completer thresholds that achieve the desired operating characteristics while minimizing the expected or total sample size. Still, for fun (learning is fun!), here is a very naïve R function that gives the same result:
```r
## Calculate Eqn 1 from Simon 1989
bprob <- \(r1, n1, r, n, p) {
  x <- if ((r1+1) <= min(n1, r)) seq(r1+1, min(n1, r)) else numeric(0)
  pbinom(r1, n1, p) + sum(dbinom(x, n1, p) * pbinom(r-x, n-n1, p))
}

## Derive Simon two-stage design. Be warned: not fast.
simon <- function(h0, ha, nrange, alpha=0.05, beta=0.20) {
  rv <- matrix(NA, ncol=6, nrow=length(nrange))
  for (i in seq_along(nrange)) {
    ben <- n <- nrange[i]
    for (n1 in (1:(n-1))) {
      for (r1 in (0:n1)) {
        for (r in (r1:n)) {
          p0 <- bprob(r1, n1, r, n, h0)
          pa <- bprob(r1, n1, r, n, ha)
          if (p0 > (1-alpha) & pa < beta) {
            pet <- pbinom(r1, n1, h0)
            en  <- n1 + (1-pet) * (n-n1)
            if (en < ben) {
              ben <- en
              rv[i,] <- c(r1, n1, r, n, pet, en)
            }
          }
        }
      }
    }
  }
  ## Discard non-solutions -- next lines will assume solutions exist!
  rv <- rv[apply(rv, 1, \(.) !all(is.na(.))), ]
  ## Minimax: minimize total sample size
  ## Optimal: minimize expected sample size under H0 (= EN)
  rv <- rv[c(which.max(rv[,4] == min(rv[,4])),
             which.max(rv[,6] == min(rv[,6]))), ]
  dimnames(rv) <- list(c("Minimax", "Optimal"),
                       c("r1", "n1", "r", "n", "PET", "EN"))
  return(rv)
}
```
You'll have to provide an educated guess for the sample size range -- or wait for the code to crawl through a larger nrange -- but luckily we've already derived the total sample size (and final acceptance threshold) for the "Minimax" solution above:
```r
simon(h0=.50, ha=.70, nrange=35:45)
#>         r1 n1  r  n       PET       EN
#> Minimax 12 23 23 37 0.6611803 27.74348
#> Optimal  8 15 26 43 0.6963806 23.50134

## Compare versus a proper implementation
clinfun::ph2simon(.50, .70, .05, .20)
#>         r1 n1  r  n EN(p0) PET(p0)
#> Minimax 12 23 23 37  27.74  0.6612
#> Optimal  8 15 26 43  23.50  0.6964
```
I'll just note here that, using the above, you can verify that the study you referenced was not in fact an optimal design... nor would observing $12/17$ completers have led to rejection of their $H_0$!