
Questions tagged [sigmoid]

For questions about sigmoid functions (in particular, the logistic function) and the consequences of using them as activation functions in neural networks.

2 votes
2 answers
131 views

In which types of learning tasks are linear units more useful than sigmoid activation functions in the output layer of a multi-layer neural network?
DSPinfinity • 1,223
1 vote
3 answers
216 views

Performing -ln(ε) in NumPy returns relatively small values like this: ...
Muhammad Ikhwan Perwira
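A quick NumPy check of the magnitude involved, assuming the ε in question is double-precision machine epsilon (the excerpt truncates before defining it):

```python
import numpy as np

# Assumption: ε here is float64 machine epsilon; the original question
# is truncated before it says which ε it uses.
eps = np.finfo(np.float64).eps   # ≈ 2.220446e-16
val = -np.log(eps)               # ≈ 36.04, modest compared to the float64 range
print(eps, val)
```

This illustrates why -ln(ε) stays relatively small: the logarithm compresses even the tiniest representable positives into double digits.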
2 votes
2 answers
184 views

Cybenko showed that if $\sigma$ is a sigmoidal, continuous function, then for any $\varepsilon > 0$, for any continuous function $f: [0, 1]^d \to \mathbb{R}$, there exists a function of the form $g:...
JackEight • 123
0 votes
0 answers
63 views

Attached image. How would you find the relationship between the independent variable (x) and the dependent variable (y)? Is it linear or non-linear? What would the function look like? P.S. I believe this is ...
DLCVIP007
2 votes
1 answer
108 views

I use a Keras EfficientNetB7 and transfer learning to solve a binary classification problem. I use tf.keras.layers.Dense(1, activation="sigmoid")(x) for ...
Doug • 125
1 vote
1 answer
130 views

How to create a model that can give an output in the range 0 to 1 with a sigmoid activation function, where a value closer to 0 means a lower chance that the input number is not prime and the ...
Muhammad Ikhwan Perwira
7 votes
4 answers
2k views

Within the Sigmoid Squishification function, f(x) = 1/(1 + e^(-x)) "e" is unnecessary, as it can be replaced by any other value that is not ...
Jake • 181
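A small NumPy sketch of the standard answer to this question: replacing e with any other base a > 0 (a ≠ 1) only rescales the input by ln(a), since 1/(1 + a^(−x)) = 1/(1 + e^(−x·ln a)):

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid with base e
    return 1.0 / (1.0 + np.exp(-x))

def base_a_sigmoid(x, a):
    # Hypothetical variant using an arbitrary base a > 0, a != 1
    return 1.0 / (1.0 + a ** (-x))

x = np.linspace(-5.0, 5.0, 11)
a = 2.0
# Identical curves up to a horizontal rescaling of x by ln(a)
assert np.allclose(base_a_sigmoid(x, a), sigmoid(x * np.log(a)))
```

So e is not mathematically necessary; it is conventional largely because it makes the derivative σ'(x) = σ(x)(1 − σ(x)) come out clean.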
0 votes
1 answer
75 views

I understand that to solve multilabel classification problems, we can use the softmax activation function in the output layer of the neural network. The softmax function outputs probabilities of each ...
Dawood Ahmad
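A NumPy sketch of the distinction the usual answers draw here: softmax couples the outputs so they sum to 1 (one label per input, i.e. multi-class), while independent sigmoids let several labels be active at once (multilabel):

```python
import numpy as np

logits = np.array([2.0, -1.0, 0.5])

# Softmax: outputs compete and sum to 1 -> mutually exclusive classes
softmax = np.exp(logits) / np.exp(logits).sum()

# Element-wise sigmoid: each output is an independent probability in (0, 1)
sigmoids = 1.0 / (1.0 + np.exp(-logits))

assert np.isclose(softmax.sum(), 1.0)
assert np.all((sigmoids > 0) & (sigmoids < 1))  # need not sum to 1
```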
0 votes
1 answer
129 views

SigmoidBinaryCrossEntropyLoss implementation in DJL accepts two kinds of outputs from NNs: one where the sigmoid activation has already been applied, and one where the raw NN output ...
src091 • 1
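A NumPy sketch (not DJL's actual API) of why the two input conventions give the same loss, and why the from-logits form is usually preferred: it can use the numerically stable log-sum-exp formulation instead of taking log of a saturated sigmoid:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_on_probs(p, y):
    # Expects probabilities, i.e. sigmoid already applied upstream
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def bce_with_logits(z, y):
    # Numerically stable form operating directly on raw logits
    return np.maximum(z, 0.0) - z * y + np.log1p(np.exp(-np.abs(z)))

z = np.array([-3.0, 0.5, 4.0])   # raw network outputs (logits)
y = np.array([0.0, 1.0, 1.0])    # binary targets
assert np.allclose(bce_on_probs(sigmoid(z), y), bce_with_logits(z, y))
```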
0 votes
1 answer
146 views

I have been reading Michael Nielsen’s book online on his website at http://neuralnetworksanddeeplearning.com/chap1.html. I am struggling to understand the second exercise: When c approaches infinity, ...
QuantNoob
0 votes
2 answers
209 views

I am writing a Neural Network from scratch. Below is what I have right now, based on the math that I think I understand. ...
1 vote
1 answer
426 views

The creation of negative rewards leads to the chance of Q-values being negative. However, networks with ReLU or sigmoid output activations just cannot predict negative values. This will lead to a case where ...
desert_ranger
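A NumPy sketch of the range problem described above, and of the usual fix (a linear output layer, which can represent negative Q-values):

```python
import numpy as np

z = np.linspace(-5.0, 5.0, 101)   # pre-activations at the output layer

relu   = np.maximum(z, 0.0)             # range [0, inf): never negative
sig    = 1.0 / (1.0 + np.exp(-z))       # range (0, 1): never negative
linear = z                              # range (-inf, inf): can express negative Q-values

assert relu.min() >= 0.0 and sig.min() > 0.0   # both clip away negatives
assert linear.min() < 0.0                       # linear output does not
```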
0 votes
1 answer
857 views

I am doing binary classification using an LSTM and my output layer is 1 neuron with a sigmoid function. My labels are either 0 or 1. ...
Allen Ye
0 votes
2 answers
2k views

I have a textual dataset that has a set of real numbers as labels: L={0.0, 0.33, 0.5, 0.75, 1.0}, and I have a model that takes the text as input and has a Sigmoid output. If I train the model on this ...
Minions • 123
3 votes
3 answers
6k views

CONTEXT: I was wondering why there are sigmoid and tanh activation functions in an LSTM cell. My intuition was based on the flow of tanh(x)*sigmoid(x) and the ...
MASTER OF CODE
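A NumPy sketch of the usual intuition behind this pairing: the sigmoid's (0, 1) range makes it a soft on/off gate, while tanh's (−1, 1) range carries signed candidate content, so their product stays bounded in (−1, 1):

```python
import numpy as np

z = np.linspace(-4.0, 4.0, 9)

gate = 1.0 / (1.0 + np.exp(-z))   # sigmoid gate in (0, 1): a soft switch
cand = np.tanh(z)                 # tanh candidate in (-1, 1): signed content

gated = cand * gate               # tanh(z) * sigmoid(z), bounded in (-1, 1)

assert np.all((gate > 0.0) & (gate < 1.0))
assert np.all(np.abs(gated) < 1.0)
```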