There is no "loss function" for hard-margin SVMs, but once we move to soft-margin SVMs, a loss function does appear.
Here is the detailed explanation:
When we talk about a loss function, what we really mean is an unconstrained training objective that we can minimize directly, e.g. with gradient descent.
In the hard-margin SVM setting, the "objective" is to maximize the geometric margin subject to every training example lying on the correct side of the margin, i.e. $$\begin{aligned} & \max_{w, b}\frac{1}{\Vert w \Vert} \\ &\text{s.t.}\quad y_i(w^Tx_i+b) \ge 1, \quad i=1,\dots,n, \end{aligned} $$ which is equivalent to minimizing $\frac{1}{2}\Vert w \Vert_2^2$ under the same constraints. This is a constrained quadratic program: the constraints are hard, so the problem cannot be rewritten as a single unconstrained objective that we could attack with plain gradient descent. In that sense there is no analytic "loss function" for hard-margin SVMs; we hand the problem to a QP solver (or work with the dual) instead, as in the sketch below.
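To make that concrete, here is a minimal sketch (my own illustration, not part of the original argument) of solving the hard-margin problem as a QP. I use `cvxpy` as the off-the-shelf convex solver and a made-up, linearly separable toy dataset:

```python
import cvxpy as cp
import numpy as np

# Made-up, linearly separable toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),
               rng.normal(+2.0, 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

w = cp.Variable(2)
b = cp.Variable()

# Equivalent QP form: minimize (1/2)||w||^2  s.t.  y_i (w^T x_i + b) >= 1.
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
problem = cp.Problem(objective, constraints)
problem.solve()

print("w =", w.value, "b =", b.value)
```

Note that the constraints are handed to the solver as constraints; nowhere do we write down a single loss to take gradients of.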
However, in the soft-margin SVM setting, we add slack variables $\xi_i$ to allow the SVM to make mistakes. We now try to solve $$\begin{aligned} \min_{w,b,\boldsymbol{\xi}}\quad &\frac{1}{2}\Vert w \Vert_2^2 + C\sum_i \xi_i \\ \text{s.t.}\quad &y_i(w^Tx_i+b) \ge 1-\xi_i, \quad i=1,\dots,n, \\ & \boldsymbol{\xi} \succeq \mathbf{0}. \end{aligned} $$ This penalizes every training example $x_i$ that violates the margin by adding $C\xi_i$ to the objective we minimize.

Recall the hinge loss: $$ \ell_{\mbox{hinge}}(z) = \max\{0, 1-z\}. $$ At the optimum, $\xi_i$ is zero whenever the example lies on the correct side of the margin, and positive exactly when the example falls inside the margin or is misclassified; together with the nonnegativity of the hinge loss, this lets us rephrase our problem as $$ \min_{w,b}\ \frac{1}{2}\Vert w \Vert_2^2 + C\sum_i\ell_{\mbox{hinge}}(y_i(w^Tx_i+b)). $$ The hinge loss is convex and has a subgradient everywhere (it is not differentiable only at $z=1$), so we can minimize this objective for the soft-margin SVM directly by (sub)gradient descent.
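For instance, a bare-bones subgradient-descent sketch of that unconstrained objective could look like this (my own illustrative code with made-up hyperparameter defaults, not a tuned implementation):

```python
import numpy as np

def soft_margin_svm_subgradient(X, y, C=1.0, lr=1e-3, n_iters=5000):
    """Minimize (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w^T x_i + b))."""
    _, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        margins = y * (X @ w + b)   # z_i = y_i (w^T x_i + b)
        active = margins < 1        # examples with nonzero hinge loss
        # Subgradient of the objective with respect to w and b.
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

The only nondifferentiable point is at margin exactly 1, where any value between the two one-sided derivatives is a valid subgradient; the code simply treats that boundary case as inactive.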
So the slack variable is just the hinge loss in disguise, and the properties of the hinge loss (it is nonnegative, and it is active exactly when the margin $y_i(w^Tx_i+b)$ is less than 1) absorb our optimization constraints for us, as the short derivation below makes explicit.
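To spell this out: for any fixed $(w, b)$, the best choice of each $\xi_i$ in the soft-margin program is the smallest value satisfying both constraints, $$ \xi_i^\star = \max\{0,\ 1 - y_i(w^Tx_i+b)\} = \ell_{\mbox{hinge}}(y_i(w^Tx_i+b)), $$ and substituting $\xi_i^\star$ back in eliminates the constraints and recovers exactly the unconstrained hinge-loss objective above.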