Neither quite right nor quite wrong in my view.
Standardization doesn't do much harm, but I don't think it is essential for any purpose. Historically it was often important to have the values of different predictors on similar scales for computational reasons, but decent software now usually takes care of that for you. It remains true that predictors measured in very different units and/or with very different magnitudes can have coefficients that are very large or very small, which can be a little awkward in reporting, so multiplying up or down can help (e.g. changing km to m or vice versa). That's not fundamental, however.
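Here is a minimal sketch of that point (simulated data; assuming `numpy` and `statsmodels` are available). Re-expressing a distance predictor in m rather than km just rescales its coefficient by the same factor of 1000, while the fit itself is unchanged:

```python
# Minimal sketch: changing a predictor's units rescales its coefficient
# but does not change the fitted model (hypothetical simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
distance_km = rng.uniform(1, 50, size=200)
y = 3.0 + 0.2 * distance_km + rng.normal(scale=1.0, size=200)

fit_km = sm.OLS(y, sm.add_constant(distance_km)).fit()
fit_m = sm.OLS(y, sm.add_constant(distance_km * 1000)).fit()  # km -> m

print(fit_km.params[1], fit_m.params[1])   # slope is divided by 1000
print(fit_km.rsquared, fit_m.rsquared)     # identical fit either way
```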
But your main concern here appears to be judging relative importance, which is a dubious business at best. The predictors act as a team, and their relative importance can't be accurately disentangled, especially when, as is typical, there are moderate or strong correlations between them. That said, most data analysts are tempted to have a look, and t statistics can help with that informally. So while standardization does no harm, the t statistics do what you want as well as anything does, regardless of whether a predictor has been transformed beforehand: the t statistics reported for the predictors are dimensionless, so rescaling or standardizing a predictor leaves them unchanged.
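A minimal sketch of that invariance (simulated, moderately correlated predictors; again assuming `statsmodels`): standardizing the predictors changes the coefficients, but the slope t statistics come out the same either way.

```python
# Minimal sketch: slope t statistics are unchanged by standardizing predictors,
# even though the coefficients themselves change (hypothetical simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)   # moderately correlated with x1
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X_raw = np.column_stack([x1, x2])
X_std = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)

fit_raw = sm.OLS(y, sm.add_constant(X_raw)).fit()
fit_std = sm.OLS(y, sm.add_constant(X_std)).fit()

print(fit_raw.params[1:], fit_std.params[1:])     # coefficients differ
print(fit_raw.tvalues[1:], fit_std.tvalues[1:])   # slope t statistics are identical
```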
Some researchers insist that the choice of predictors be made in advance on substantive or theoretical grounds. In practice many people modify initial models when they see that some predictors appear relatively unimportant. The more you do that, the greater the obligation to report such activity openly and not to present the final model as the one you thought of to begin with.
Many of these matters are partly tribal. If people in your corner of science report standardized coefficients, then follow suit.