The sklearn library has stochastic gradient descent (SGD) implementations of a support vector classifier and regressor.
The classifier is SGDClassifier(loss="hinge", ...), and the regressor is SGDRegressor(loss="epsilon_insensitive", ...).
There are other loss= options that are also SVM-related, and you can instead choose a loss= that yields a different model entirely, such as logistic regression. The common thread is that every configuration is fit with stochastic gradient descent.
LinearSVC() and LinearSVR() are other linear SVM options in sklearn that scale efficiently; they are backed by the liblinear solver rather than SGD.
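For comparison, here is a quick sketch of both liblinear-backed estimators on synthetic data (datasets and hyperparameters are illustrative, not tuned):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.svm import LinearSVC, LinearSVR

Xc, yc = make_classification(n_samples=500, random_state=0)
Xr, yr = make_regression(n_samples=500, noise=0.1, random_state=0)

clf = LinearSVC().fit(Xc, yc)  # linear SVM classifier
reg = LinearSVR().fit(Xr, yr)  # linear SVM regressor

print(clf.score(Xc, yc), reg.score(Xr, yr))
```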
sklearn also has kernel approximations (e.g. Nystroem or RBFSampler) that you can prepend to these models in a pipeline to recover some of the nonlinear behaviour that kernel functions provide.
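A sketch of that idea, assuming an RBF-style problem: Nystroem features feed a linear SVM, giving a nonlinear decision boundary while the underlying solver stays linear (the moons dataset and n_components value are just illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=1000, noise=0.1, random_state=0)

# Approximate an RBF kernel map, then fit a linear SVM on the mapped features
model = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    SGDClassifier(loss="hinge", random_state=0),
)
model.fit(X, y)
print(model.score(X, y))
```

A plain linear SVM cannot separate the two moons, so a training score well above what a linear boundary achieves is the signal that the kernel approximation is doing its job.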
This answer has a few more details about speeding things up. The sklearn page also has usage tips.
For large, complex datasets, though, I think practitioners tend to reach for other models, like random forests.