I am trying to compute the standard error of the sample spectral risk measure, which is used as a metric for portfolio risk. Briefly, a sample spectral risk measure is defined as $q = \sum_i w_i x_{(i)}$, where the $x_{(i)}$ are the sample order statistics and the $w_i$ are a sequence of monotonically non-increasing, non-negative weights that sum to $1$. I would like to compute the standard error of $q$ (preferably not via bootstrap). I don't know much about L-estimators, but it looks to me like $q$ is a kind of L-estimator (with extra restrictions imposed on the weights $w_i$), so this is presumably a well-studied problem.
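For concreteness, here is a minimal sketch in Python/NumPy of how I compute $q$ itself; the `spectral_risk` function, the exponential weighting scheme, and the risk-aversion parameter `k` are just illustrative stand-ins (the actual weights are whatever the user supplies), not part of the definition.

```python
import numpy as np

def spectral_risk(x, w):
    """q = sum_i w_i * x_(i), with x_(i) the sample order statistics."""
    x_sorted = np.sort(x)      # order statistics x_(1) <= ... <= x_(n)
    return np.dot(w, x_sorted)

# Illustrative example only: simulated returns and an exponential weight
# scheme that is non-increasing, non-negative, and sums to 1.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)      # hypothetical portfolio returns
k = 5.0                        # illustrative risk-aversion parameter
i = np.arange(1, x.size + 1)
w = np.exp(-k * i / x.size)    # decreasing in i, so weight falls on the worst outcomes
w /= w.sum()                   # normalize to sum to 1

print(spectral_risk(x, w))
```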
edit: per @srikant's question, I should note that the weights $w_i$ are chosen a priori by the user and should be considered independent of the samples $x$.