Given the basic elements of a neuron (as below) with a bias value:
I learned that a bias value allows you to shift the activation function (say, the sigmoid function) to the left or right, which may be critical for successful learning. I have yet to understand why.
Mathematically, when we change the argument of a function (the sigmoid function in this case) by adding or subtracting a constant, the output curve shifts to the left or right accordingly, as shown below:
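A minimal sketch of what I mean (my own illustration using NumPy; the function name is mine, not from any library):

```python
import numpy as np

def sigmoid(z):
    """Standard logistic sigmoid: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-6, 6, 5)

# Adding a positive constant shifts the curve to the LEFT:
# sigmoid(x + 2) reaches 0.5 at x = -2 instead of x = 0.
print(sigmoid(x))        # baseline curve
print(sigmoid(x + 2.0))  # shifted left by 2
print(sigmoid(x - 2.0))  # shifted right by 2
```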
My understanding is that binary classification problems (image recognition, for example) use neural-network-based AI solutions. This is why a neuron runs an activation function such as the sigmoid, which takes the weighted sum of all inputs and produces an output value in the interval (0, 1), as sketched below.
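As I understand it, a single neuron's computation would look something like this (a sketch under my own assumptions, not any particular library's API):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """One neuron: linear combiner Σ(w_i * x_i) plus bias,
    passed through the sigmoid. Output lies in (0, 1)."""
    z = np.dot(weights, inputs) + bias
    return sigmoid(z)

# Example: three inputs with weights and a bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.4, -0.1])
print(neuron(x, w, bias=0.7))
```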
Questions:
1) Given any argument (input) to the sigmoid function, why is shifting the output curve (left or right) using a bias critical to successful learning (training) in an artificial neural network?
2) Mathematically, a function must have one domain and one range. For successful learning, why does the sigmoid function (a single-variable function) use a linear combiner (Σ) over all the inputs (synapses × weights)? The summation of synapses × weights may not be unique, since different input/weight combinations (especially with negative weights) can produce the same sum, as illustrated below.
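To make my concern in question 2 concrete, here is a small example of my own (the numbers are arbitrary) where two different input vectors produce the same weighted sum, and therefore the same sigmoid output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0])   # one negative weight

# Two DIFFERENT input vectors that yield the SAME weighted sum:
x1 = np.array([3.0, 1.0])   # 1*3 + (-2)*1 = 1.0
x2 = np.array([1.0, 0.0])   # 1*1 + (-2)*0 = 1.0

print(np.dot(w, x1), np.dot(w, x2))                       # 1.0 1.0
print(sigmoid(np.dot(w, x1)) == sigmoid(np.dot(w, x2)))   # True: same output
```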

