I am optimizing a loss function using gradient descent. I have tried several different learning rates, but the objective value converges to exactly the same point each time.
Does this mean I am stuck in a local minimum? The loss function is non-convex, so it seems unlikely that I would converge to the global minimum.
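For context, here is a minimal reproduction of the behaviour I am describing. The loss function below is just an illustrative non-convex toy example (not my actual loss): starting from the same initial point, several learning rates all settle at the same stationary point.

```python
import numpy as np

def loss(x):
    # Toy non-convex loss with multiple local minima
    return np.sin(3 * x) + 0.1 * x**2

def grad(x):
    # Analytic gradient of the loss above
    return 3 * np.cos(3 * x) + 0.2 * x

def gradient_descent(x0, lr, steps=5000):
    # Plain gradient descent from a fixed starting point
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Same initialization, different learning rates
for lr in (0.001, 0.01, 0.05):
    x_final = gradient_descent(x0=2.0, lr=lr)
    print(f"lr={lr}: x*={x_final:.4f}, loss={loss(x_final):.4f}")
```

All three runs end up at the nearest local minimum to the initialization, which matches what I am seeing: the learning rate changes how fast I get there, not where I end up.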
