The function takes around \$41\mu s\$ on average per run on my machine. About three quarters of that (around \$32\mu s\$) is spent on `downsampling_indices = np.linspace(...)`. Add another \$1.5\mu s\$ for `.round().astype(int)`, about \$1\mu s\$ for the actual sampling, plus some calling overhead, and you're there.
So if you need to use the function several times, it would be best to precompute or cache/memoize the sampling indices. If I understood your implementation correctly, the downsampling index computation is basically data independent and only depends on the lengths of the two sequences, so that should actually be viable.
For example, you could have
```python
import functools

import numpy as np


@functools.lru_cache()
def compute_downsampling_indices_cached(n_samples, data_sequence_len):
    """Compute n_samples downsampling indices for data sequences of a given length"""
    return np.linspace(0, data_sequence_len - 1, n_samples).round().astype(int)
```
and then do
```python
def resample_cache(n_samples, data_sequence):
    downsampling_indices = compute_downsampling_indices_cached(n_samples, len(data_sequence))
    return [data_sequence[ind] for ind in downsampling_indices]
```
Note that I replaced `desired_time_sequence` by `n_samples`, which would then have to be set to `len(desired_time_sequence)`, since you don't care about the actual values in `desired_time_sequence`, only its length.
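To make the changed call site concrete, here is a small self-contained sketch; the example sequences are made up for illustration, only the length of `desired_time_sequence` matters:

```python
import functools

import numpy as np


@functools.lru_cache()
def compute_downsampling_indices_cached(n_samples, data_sequence_len):
    """Cache the index computation; it depends only on the two lengths."""
    return np.linspace(0, data_sequence_len - 1, n_samples).round().astype(int)


def resample_cache(n_samples, data_sequence):
    downsampling_indices = compute_downsampling_indices_cached(n_samples, len(data_sequence))
    return [data_sequence[ind] for ind in downsampling_indices]


# Hypothetical stand-ins for your actual data:
desired_time_sequence = [0.0, 0.25, 0.5, 0.75, 1.0]  # only len(...) is used
data_sequence = list(range(100))

resampled = resample_cache(len(desired_time_sequence), data_sequence)
print(resampled)  # [0, 25, 50, 74, 99]
```

Repeated calls with the same pair of lengths then hit the `lru_cache` and skip the `np.linspace` work entirely.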
It might also be possible to benefit from NumPy's fancy indexing and use `return np.array(data_sequence)[downsampling_indices]` for larger inputs. You will have to check that yourself.
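As a sketch of that variant (the name `resample_cache_np` is mine), the list comprehension is replaced by a single vectorized lookup, at the cost of converting `data_sequence` to an array first:

```python
import functools

import numpy as np


@functools.lru_cache()
def compute_downsampling_indices_cached(n_samples, data_sequence_len):
    """Cache the index computation; it depends only on the two lengths."""
    return np.linspace(0, data_sequence_len - 1, n_samples).round().astype(int)


def resample_cache_np(n_samples, data_sequence):
    downsampling_indices = compute_downsampling_indices_cached(n_samples, len(data_sequence))
    # Fancy indexing picks all sampled elements in one vectorized step;
    # returns an ndarray instead of a list.
    return np.array(data_sequence)[downsampling_indices]
```

Whether the vectorized lookup beats the list comprehension will depend on the input size, since `np.array(data_sequence)` copies the whole sequence on every call.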
On my machine `resample_cache(...)` takes \$1.7\mu s\$, which is a decent speed-up of about 20x.