Given the basic elements of a neuron (as shown below) with a bias:

[![neuron structure][1]][1]

 [1]: https://i.sstatic.net/KPDLCRyG.png

-------

I have read that a bias value allows you to shift the activation function (say, the **sigmoid function**) to the left or right, which may be **critical for successful learning**. I have yet to understand why.
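To make the claim concrete, here is a minimal derivation for a single-input neuron (`w` and `b` are the usual weight and bias symbols, not values taken from the figure):

```latex
% Single-input neuron: the bias b slides the sigmoid along the input axis
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad z = wx + b,
\qquad
\sigma(wx + b) = \sigma\!\left( w \left( x + \tfrac{b}{w} \right) \right)
```

So, for `w ≠ 0`, changing `b` slides the curve horizontally by `-b/w` without changing its shape; with no bias, the midpoint `σ(0) = 0.5` is pinned at `x = 0`.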


-----

Mathematically, when we change the argument of a function (the sigmoid function in this case) by adding or subtracting a constant, the output curve shifts to the left or right accordingly, as shown below:


[![sigmoid function][2]][2]

[2]: https://i.sstatic.net/wcaUpgY8.png
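A quick numeric check of this shift (a minimal sketch; the bias values are arbitrary):

```python
import numpy as np

def sigmoid(z):
    """Standard logistic sigmoid; output lies in the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-6, 6, 5)       # a few sample inputs
for b in (-2.0, 0.0, 2.0):      # arbitrary bias values
    # Adding b to the argument shifts the curve left (b > 0) or right (b < 0)
    print(f"b = {b:+.1f}:", np.round(sigmoid(x + b), 3))
```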

------

My understanding is that zero-one classification problems (for example, image recognition) are solved with neural-network-based AI solutions. This is why a neuron runs an activation function such as the sigmoid, which takes the weighted sum of all inputs and produces an output value within the interval (0, 1).
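For concreteness, a single neuron with this structure might look like the following sketch (the input and parameter values are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """Linear combiner plus bias, squashed by the sigmoid."""
    z = np.dot(w, x) + b        # z = sum(w_i * x_i) + b
    return sigmoid(z)           # output in (0, 1)

x = np.array([0.5, -1.2, 3.0])  # example inputs
w = np.array([0.8, 0.4, -0.6])  # example weights
print(neuron(x, w, b=0.0))      # ~0.132 with zero bias
print(neuron(x, w, b=1.5))      # ~0.406: positive bias raises the activation
```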

**Question:**

1) Given any argument (input) to the sigmoid function, how is shifting the output curve (left or right) using a bias **critical to** successful learning (training) in an artificial neural network?

2) Mathematically, a function must have one domain and one range, but, for successful learning, why is the sigmoid considered a single-variable function of the summed inputs (the linear combiner Σ of synapses × parameters)? Instead of, say, a multi-variable function of the individual weighted inputs?