$\begingroup$

I've read that upsampling performed in digital music playback can color the sound, produce artifacts, etc. For example, an audio file ripped from a CD might be 44.1 kHz/16-bit, and then upconverted to 48 kHz/16-bit and played via an optical digital audio output. Audiophiles say this is bad because the signal needs to be "bit perfect" to be reproduced correctly.

Is this correct? My vague knowledge of DSP from grad school leads me to think that all upsampling should do is increase the representable bandwidth of the signal. But since the source signal is bandlimited, there shouldn't be anything new in the added bandwidth, and I don't see why the process would color the sound.
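(For concreteness, here is my own quick sketch of the ratio involved, not part of the playback chain itself: converting 44.1 kHz to 48 kHz reduces to interpolating and decimating by small integer factors.)

```python
from math import gcd

fs_in, fs_out = 44100, 48000
g = gcd(fs_out, fs_in)               # 300
up, down = fs_out // g, fs_in // g   # interpolate by 160, then decimate by 147
assert fs_in * up == fs_out * down   # 44.1 kHz * 160/147 is exactly 48 kHz
```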

What am I missing?

$\endgroup$
  • $\begingroup$ What conversion tool did you have in mind? A good result requires a good conversion tool, but if it's done right you'll get all the fidelity you need. If an audiophile can tell the difference, then there's a problem with the audio system, not the data. $\endgroup$ Commented Apr 30, 2014 at 15:57
  • $\begingroup$ I guess the imperfections they refer to come from the filtering involved in the upsampling procedure. However, these artifacts can be made very small if complexity is no issue. $\endgroup$ Commented Apr 30, 2014 at 16:30
  • $\begingroup$ So, the specific application is streaming audio via an Apple TV. [network source]-->Apple TV-->Optical audio out-->DAC. Apparently, everything going into or out of the Apple TV converts to 48 kHz/16-bit. I could understand the DAC coloring the sound, but assuming nothing on the digital path is lossy, I don't understand why an ideally-implemented upsampling stage would degrade the signal. $\endgroup$ Commented Apr 30, 2014 at 17:01
  • $\begingroup$ There are no ideal digital filters (that are stable and causal and that can be implemented with finite complexity). They will introduce amplitude and phase distortions. These distortions can be made small by increasing the complexity of the filters. Check this link: en.wikipedia.org/wiki/Sample_rate_conversion $\endgroup$ Commented Apr 30, 2014 at 20:45
  • $\begingroup$ If you specify the degree of "degradation" to as small of an epsilon as you want, the interpolation that computes the new samples in between can accomplish that, as long as you're willing to pay for it with computational effort. If your interpolation looks at 64 samples (32 before and 32 after your interpolated sample), no one, including dogs, can hear any degradation. You can implement a pretty damn good brick-wall polyphase LPF with 64 samples. $\endgroup$ Commented Apr 30, 2014 at 21:57
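The last comment's 64-tap interpolation can be sketched directly in NumPy. This is an illustrative implementation of my own (the function name, the Blackman window choice, and the 1 kHz test tone are assumptions, not anything a real player uses), not the optimized polyphase form a product would ship:

```python
import numpy as np

def resample_windowed_sinc(x, fs_in, fs_out, half=32):
    """Upsample x from fs_in to fs_out (fs_out >= fs_in) by evaluating a
    Blackman-windowed sinc kernel at each output instant, using the
    2*half input samples around it (32 before, 32 after)."""
    n_out = int(round(len(x) * fs_out / fs_in))
    y = np.empty(n_out)
    for m in range(n_out):
        t = m * fs_in / fs_out                 # output instant, in input-sample units
        k = np.arange(int(t) - half + 1, int(t) + half + 1)
        u = t - k                              # fractional distances, |u| <= half
        # continuous Blackman window, tapering to zero at |u| = half
        w = 0.42 + 0.5 * np.cos(np.pi * u / half) + 0.08 * np.cos(2 * np.pi * u / half)
        h = np.sinc(u) * w                     # windowed ideal interpolator
        valid = (k >= 0) & (k < len(x))        # clip the kernel at the signal edges
        y[m] = np.dot(x[k[valid]], h[valid])
    return y

# 0.1 s of a 1 kHz tone at 44.1 kHz, upsampled to 48 kHz
fs_in, fs_out = 44100, 48000
x = np.sin(2 * np.pi * 1000 * np.arange(4410) / fs_in)
y = resample_windowed_sinc(x, fs_in, fs_out)

# Compare against the same tone sampled directly at 48 kHz, away from the edges
ref = np.sin(2 * np.pi * 1000 * np.arange(len(y)) / fs_out)
err = np.max(np.abs((y - ref)[200:-200]))
```

Even this brute-force per-sample kernel keeps the error on a pure tone below roughly -45 dB relative to full scale; commercial resamplers with longer polyphase filters push it far below anything audible.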

2 Answers

$\begingroup$

The up-sampling process will always change the signal in some measurable way. However, if it's done properly, the changes are negligible and don't result in any audible difference. Most commercial sample rate converters (hardware or software implementations) do a really good job at this.

Of course, if done badly, upsampling can result in clearly audible signal degradation. I'm not familiar with Apple's implementation, but I would assume they got this right.

$\endgroup$
$\begingroup$

When you take a specific sampling rate, such as a CD's 44.1 kHz/16-bit, and upsample the music to a higher sampling rate, you are not adding any information at that higher rate that wasn't already in the original 44.1/16 samples.

So what is happening is that you are stretching the music out to fit the new sampling rate. This is bad. Why? Because you are "stretching," or distorting, the notes to an artificial sound signature. Every musical note has several harmonic envelopes around it; when you stretch or distort the original note with its harmonic envelopes, you are distorting the harmonics of the original music. Sure, people are going to listen to the oversampled version and say, "Wow, there is so much better resolution!" That is because there is more space between the notes when they are stretched out, but the subtle yet important harmonic structure of the music is compromised, and it's the harmonics in music that give it its beautiful tonal sound. So to heck with Tidal deciding to force me, as a customer, onto their new one-tier program with its "higher resolution" music. If the original recording is sampled at a higher rate, that is perfectly OK; then you are getting true high-res without harmonic distortion. But how many songs on your list were originally recorded above the 44.1/16 rate? I don't know the answer for sure, but my guess is that most of the songs I listen to on these streaming services were originally sampled at 44.1/16, and the streaming services are artificially slapping the "hi-res" label on everything by oversampling the original, which to my ears ruins the "musicality" of the material.

$\endgroup$
  • $\begingroup$ The tone gives it away, but this answer is worthless. Maybe even worth less than zero. $\endgroup$ Commented Mar 18, 2024 at 3:35
  • $\begingroup$ John, are you confusing interpolation and time stretching? $\endgroup$ Commented Mar 18, 2024 at 4:07
  • $\begingroup$ This answer is wrong and full of audiophile beliefs. In real life, we have oversampling ADC and DAC converters for audio, which move the analog-world problems into the digital domain; systems with these chips handle the underlying audio better than old systems with non-oversampling chips and the expensive analog filters they required. Converting the sampling rate cannot affect what you hear, and cannot even be measured with engineering equipment, unless it was done poorly to begin with. I think you can apply for the official HI-RES label if the device and system are compatible up to 40 kHz. $\endgroup$ Commented Mar 19, 2024 at 6:15
