Use logistic regression with a Gaussian link.
The count of simultaneous responses for a given value of $x$, written $y(x)$, is the number of successes in $n=100$ independent Bernoulli trials whose common chance of success is given by the Gaussian function. Letting $\theta$ stand for the three unknown parameters to be estimated, write the value of that Gaussian at $x$ as $\mu(x, \theta)$. Then the probability of observing $y(x)$ successes, with $0 \le y(x) \le n$, is
$${\Pr}_\theta(y(x)) = \binom{n}{y(x)} \mu(x,\theta)^{y(x)} \left(1-\mu(x,\theta)\right)^{n-y(x)}.$$
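For concreteness, this is just the binomial probability mass function. A small sketch in R with made-up values of $n$, $y(x)$, and $\mu(x,\theta)$ (not the data from the question) shows the formula agrees with R's built-in dbinom:

n.x <- 100   # hypothetical number of trials at one value of x
y.x <- 90    # hypothetical count of successes there
mu.x <- 0.85 # hypothetical value of the Gaussian at that x
choose(n.x, y.x) * mu.x^y.x * (1 - mu.x)^(n.x - y.x) # the formula above
dbinom(y.x, size = n.x, prob = mu.x)                 # same value from R's binomial pmf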
The likelihood of a set of independent observations arising from the same underlying Gaussian is the product of these expressions. The part of the product that varies with $\theta$ comes from the last two factors. Because the binomial coefficients are dropped, these factors tend to be extremely small, so about the only reasonable way to handle them on a computer is through their logarithms. Thus the part of the log likelihood that varies with $\theta$ is
$$\Lambda(\theta) = \sum_{x}\left(y(x)\log(\mu(x,\theta)) + (n-y(x))\log(1-\mu(x,\theta))\right).$$
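Dropping the binomial coefficients is harmless because they contribute a term to the log likelihood that does not involve $\theta$, so they cannot affect where the maximum occurs. A quick numerical sketch (with made-up counts, not the data from the question):

y <- c(10, 45, 90)        # hypothetical counts of successes
n <- c(100, 100, 100)     # numbers of trials
mu.1 <- c(0.1, 0.5, 0.9)  # one candidate set of values of mu(x, theta)
mu.2 <- c(0.2, 0.4, 0.8)  # another candidate
Lambda <- function(mu) sum(y * log(mu) + (n - y) * log(1 - mu))        # partial log likelihood
full <- function(mu) sum(dbinom(y, size = n, prob = mu, log = TRUE))   # full log likelihood
full(mu.1) - Lambda(mu.1) # the difference ...
full(mu.2) - Lambda(mu.2) # ... is the same constant, sum(lchoose(n, y))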
Logistic regression with a Gaussian link maximizes this log likelihood.
To make sure the Gaussian peak does not exceed $1$, we might choose to parameterize its amplitude using a function that ranges from $0$ to $1$. Here is one convenient parameterization:
$$\mu(x, \theta) = \mu(x, (m,s,a)) = \frac{\exp(-\frac{1}{2} \left(\frac{x-m}{s}\right)^2)}{1 + \exp(-a)}.$$
The parameter $m$ is the mode of the Gaussian, $s$ (a positive number) is its spread, and $a$ (some real number) determines the amplitude, increasing with increasing $a$.
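A brief sketch checking this parameterization (plogis is R's name for the function $a \mapsto 1/(1+\exp(-a))$): the exponential factor equals $1$ at $x=m$, so the peak of $\mu$ is $1/(1+\exp(-a))$, which lies strictly between $0$ and $1$ and increases with $a$.

mu <- function(x, m, s, a) exp(-0.5 * ((x - m) / s)^2) / (1 + exp(-a)) # the parameterization above
mu(0, m = 0, s = 1, a = 2) # peak height at x = m ...
plogis(2)                  # ... equals 1/(1 + exp(-2)) = 0.8807971
plogis(c(-5, 0, 5, 20))    # amplitudes approach 0 and 1 but never reach them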
Use a multivariate nonlinear optimization procedure suitable for smooth functions. (Avoid specifying any constraints at all by using $s^2$ instead of $s$ as a parameter, if necessary.) Given some vaguely reasonable estimates of the parameters, it should have no trouble finding the global optimum.
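For instance (one hedged alternative, not the parameterization used in the code below), optimizing over $\log s$ rather than $s$ makes all three components of $\theta$ range freely over the real numbers:

gaussian.unconstrained <- function(x, theta) {
  # theta = (m, log(s), a): every component is unrestricted.
  m <- theta[1]; s <- exp(theta[2]); a <- theta[3]
  exp(-0.5 * ((x - m) / s)^2) / (1 + exp(-a))
}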
As an example, here is a detailed implementation of the fitting procedure in R using data from the question. It is modified from code for a four-parameter least-squares fit of a Gaussian shown in an answer at Linear regression best polynomial (or better approach to use)?.

The fit is good: the standardized residuals (shown in the boxplot produced by the code below) do not become extreme and, given the small amount of data, they are reasonably close to zero.

The estimated values are $\hat{m} = -0.033$, $\hat{s} = 0.127$, and $\hat{a} = 16.8$ (whence the estimated amplitude is $1/(1+\exp(-\hat{a})) \approx 1 - 0.00000005$).
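(A one-line check of that amplitude arithmetic, using R's plogis for $1/(1+\exp(-a))$; the full fitting code follows.)

plogis(16.8)                     # the estimated amplitude, essentially 1
plogis(16.8, lower.tail = FALSE) # its distance below 1, about 5e-08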
y <- c(10,45,90,100,60,10,5) # Counts of successes at each `x`.
N <- length(y)
n <- rep(100, N)             # Numbers of trials at each `x`.
x <- seq(-3,3,1)/10
#
# Define a Gaussian function (of three parameters {m,s,a}).
#
gaussian <- function(x, theta) {
  m <- theta[1]; s <- theta[2]; a <- theta[3];
  exp(-0.5*((x-m)/s)^2) / (1 + exp(-a))
}
#
# Compute an unnormalized log likelihood (negated).
# `y` are the observed counts,
# `n` are the trials,
# `x` are the independent values,
# `theta` is the parameter.
#
likelihood.log <- function(theta, y, n, x) {
  p <- gaussian(x, theta)
  -sum(y*log(p) + (n-y)*log(1-p))
}
#
# Estimate some starting values.
#
m.0 <- x[which.max(y)]; s.0 <- (max(x)-min(x))/4; a.0 <- 0
theta <- c(m.0, s.0, a.0)
#
# Do the fit.
#
fit <- nlm(likelihood.log, theta, y, n, x)
#
# Plot the results.
#
par(mfrow=c(1,1))
plot(c(min(x),max(x)), c(0,1), main="Data", type="n", xlab="x", ylab="y")
curve(gaussian(x, fit$estimate), add=TRUE, col="Red", lwd=2)
points(x, y/n, pch=19)
#
# Compute residuals.
#
mu.hat <- gaussian(x, fit$estimate)
residuals <- (y - n*mu.hat) / sqrt(n * mu.hat * (1-mu.hat))
boxplot(residuals, horizontal=TRUE, xlab="Standardized residuals")
#
# (Compute the variance-covariance matrix in terms of the Hessian
# of the log likelihood at the estimates, etc., etc.)
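To follow up on the last comment in that listing, here is one way the variance-covariance matrix might be obtained (a sketch, assuming the objects y, n, x, theta, and likelihood.log defined above are still in the workspace): re-run nlm asking for the Hessian of the negated log likelihood and invert it.

fit <- nlm(likelihood.log, theta, y, n, x, hessian = TRUE)
#
# The Hessian of the negative log likelihood at the MLE is the observed
# information; its inverse estimates the variance-covariance matrix of
# (m, s, a). (With these data the amplitude is nearly 1, so the component
# for `a` is poorly determined and the matrix may be close to singular.)
#
vcov.hat <- solve(fit$hessian)
se.hat <- sqrt(diag(vcov.hat)) # approximate standard errors
rbind(estimate = fit$estimate, se = se.hat)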
(Although R can perform these calculations automatically, by means of a custom link function for its glm Generalized Linear Model function, the interface is so cumbersome and so uninstructive that I have elected not to use it for this illustration.)
Least-squares fits are maximum-likelihood estimates for Normally distributed residuals. As Nick Cox points out, your residuals do not have such a distribution.