Let $f:\mathbb R^d \rightarrow \mathbb R$ be a convex function and $A \subset \mathbb R^d$ a closed convex set. We are interested in finding the minimum of $f$ over $A$. We have access to the gradient of $f$, and we know that the unique global minimum of $f$ lies outside of $A$.
Suppose we decide to use gradient descent to find that minimum. We start at a point of $A$ and step in the direction of the negative gradient at each iteration (while decreasing the learning rate). At some point this will take us outside of $A$, since the global minimum lies outside of $A$. What modification to gradient descent must we apply to guarantee that we stay in $A$ while converging to the minimum over $A$?
What I tried: every time an iterate lands outside of $A$, we project it back onto $A$ using the Euclidean norm and continue the algorithm from the projected point. Unfortunately, this seems not to converge to the minimum over $A$, but rather to the Euclidean projection of the global minimum onto $A$, which is not necessarily the minimizer over $A$.
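For concreteness, here is a minimal numerical sketch of the project-after-each-step scheme described above. The objective $f(x,y) = (x-3)^2 + 10(y-1)^2$, the feasible set (the closed unit disk), the starting point, the step size, and the iteration count are all my own choices for illustration; they are not part of the question. The example is chosen so that the projection of the global minimum $(3,1)$ onto $A$ and the minimizer of $f$ over $A$ are two different points.

```python
# Sketch of gradient descent with Euclidean projection back onto A after
# each step. All concrete choices (f, A, step size, iterations) are
# illustrative assumptions, not part of the original question.
import numpy as np

def f(x):
    # Convex quadratic whose unique global minimum (3, 1) lies outside A.
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] - 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] - 1.0)])

def project(x):
    # Euclidean projection onto A = closed unit disk.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)          # start inside A
eta = 0.02               # constant step below 1/L (here L = 20)
for _ in range(5000):
    x = project(x - eta * grad_f(x))

# The point the question worries about: projection of the global minimum onto A.
proj_of_global_min = project(np.array([3.0, 1.0]))
```

In this toy instance one can compare `f(x)` with `f(proj_of_global_min)` to check whether the per-step projection scheme actually ends up at the projection of the global minimum, or at a feasible point with a strictly lower objective value.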