0
$\begingroup$

In modern embedded signal processing, where we often work with buffered signals and therefore have access to both past and future data, I’m curious about the continued emphasis on designing causal FIR filters. Given that non-causal FIR filters could potentially offer improved performance by utilizing future data, why do we still prioritize causal FIR filters for most practical applications? Are there specific reasons or constraints that make causal FIR filters more advantageous despite the availability of buffered signals? On modern FPGAs or GPUs we can easily implement non-causal FIR filters as well. I would appreciate insights into any trade-offs or considerations that influence this choice. I hope this is not a redundant question.

Edit after a couple of potential answers: My assumption is that when using a non-causal FIR filter I can have better performance (sharper cutoffs, higher attenuation, etc.) than with a causal FIR filter, given the same filter length.

$\endgroup$
5
  • 2
    $\begingroup$ The question appears to be based on a misunderstanding. Strictly speaking you cannot implement a causal filter since you can't see into the future. The buffering you describe simply adds delay, which makes the filter causal (and hence implementable). The vast majority of FIR design tools do exactly that, so it's absolutely standard practice. It makes no difference whether you add the delay during the filter design or as part of the buffering. In conclusion: you can't implement a zero-phase filter, but you can delay it and you end up with a linear-phase filter. Otherwise it's the same. $\endgroup$ Commented Sep 26, 2024 at 12:46
  • 1
    $\begingroup$ I think you, @Hilmar meant to say: "Strictly speaking you cannot implement an acausal filter since you can't see into the future. " $\endgroup$ Commented Sep 26, 2024 at 14:16
    $\begingroup$ @Hilmar Sorry for any misunderstanding. What I meant was: a causal FIR filter needs the current and all past samples, let's say for an order-128 filter. With a buffered signal, the current sample sits midway in the buffer, with both future and past values available. Given the same order, will the "non-causal" filter have better performance than its causal counterpart, or will the response be the same? And if it is better, why isn't it used more often than the causal one? Yes, I understand that strictly speaking it's not non-causal. $\endgroup$ Commented Sep 26, 2024 at 18:02
    $\begingroup$ @malik12: My point here is: what you are describing is standard practice. Every linear-phase filter is implemented this way; it's used all the time. Your question of "why don't we use it more often" is a false premise. It IS used very often already. Most FIR filter design is done in the "non-causal" domain and you just add delay to make it implementable. $\endgroup$ Commented Sep 26, 2024 at 22:23
    $\begingroup$ @hilmar Ah, as the impulse response is the same in either case, there is essentially no difference in the frequency response; the difference is just a delay in the impulse response. My initial thought that having access to the "future" samples in the no-delay case would improve the response was wrong. Sorry for wasting everyone's time. If this is worthy of an answer, post it as such and I will accept it; if not, I'll delete the question altogether. Thank you $\endgroup$ Commented Sep 27, 2024 at 9:19

3 Answers

2
$\begingroup$

Answer based on the comments

The question seems to be based on a misconception. Most FIR designs are already based on non-causal filters. In order to actually implement the filter, delay must be added. Whether that's done through buffering, by shifting the coefficients, or with an explicit delay block makes no difference.

The delay just adds a linear phase but the magnitude of the transfer function stays the same.
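As a sketch of this point (my own illustration, not from the original answer; assumes NumPy and SciPy): a causal linear-phase FIR is just a delayed zero-phase filter, so multiplying its frequency response by $e^{j\omega D}$ to undo the $D$-sample delay leaves the magnitude untouched and recovers a purely real, zero-phase response.

```python
# Sketch: a causal linear-phase FIR is a delayed zero-phase filter.
import numpy as np
from scipy.signal import firwin, freqz

D = 64
taps = firwin(2 * D + 1, 0.25)        # symmetric 129-tap low-pass, center at D

w, h_causal = freqz(taps, worN=1024)          # causal response, phase -w*D
h_zero_phase = h_causal * np.exp(1j * w * D)  # undo the D-sample delay

print(np.allclose(np.abs(h_causal), np.abs(h_zero_phase)))  # → True
print(np.max(np.abs(h_zero_phase.imag)) < 1e-6)             # → True (zero phase)
```

The magnitudes match exactly; only the phase differs by the linear term the delay contributes.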

My assumption is that when using a non-causal FIR filter I can have better performance (sharper cutoffs, high attenuations etc.) than a causal FIR filter given the same filter length

That assumption is incorrect, since most FIR designs already assume non-causal filters.
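To make the equivalence concrete in the time domain (again my own hedged sketch, assuming NumPy): applying a symmetric FIR with its taps centered on the current buffer position produces exactly the causal output, just shifted by the group delay.

```python
# Sketch: "non-causal" (centered) filtering of a buffer vs. causal filtering.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # buffered signal
h = np.hanning(129)
h /= h.sum()                           # symmetric 129-tap FIR, center tap at 64

y_causal = np.convolve(x, h)[:len(x)]        # output at n uses x[n-128 .. n]
y_centered = np.convolve(x, h, mode="same")  # output at n uses x[n-64 .. n+64]

# Same signal, offset by the 64-sample group delay:
print(np.allclose(y_causal[64:], y_centered[:-64]))  # → True
```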

$\endgroup$
0
$\begingroup$

But you don't have access to future data in real-time signal processing. Future data has not been acquired yet because it will be acquired in the future.

If you have buffered data, you are processing past data up to this moment. A DSP might process audio data in "real time", but it might do so in blocks of a certain size, once each block has been received; it can only process fully received blocks, not the block currently being received, so there will be a delay.

If you have data that was previously captured and saved e.g. into a file (recording of sound, or photograph, or a video clip) you have all the data you can access in any way you want to process it, but that's no longer real-time.
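A small sketch of the block-processing point (my own illustration, assuming NumPy/SciPy): filtering block by block while carrying the filter state across block boundaries reproduces the one-shot result exactly. Buffering changes only when the output samples become available, not what the filter computes.

```python
# Sketch: block-based FIR filtering equals one-shot filtering, delivered later.
import numpy as np
from scipy.signal import lfilter

h = np.ones(9) / 9.0                   # simple 9-tap moving average
x = np.arange(100, dtype=float)        # the "incoming" signal
block = 25

zi = np.zeros(len(h) - 1)              # filter state carried between blocks
out = []
for start in range(0, len(x), block):
    y, zi = lfilter(h, [1.0], x[start:start + block], zi=zi)
    out.append(y)
y_stream = np.concatenate(out)

# Identical to filtering the whole recording at once:
print(np.allclose(y_stream, lfilter(h, [1.0], x)))  # → True
```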

$\endgroup$
2
    $\begingroup$ Let me clarify: when I say I have buffered the data, the "future" samples are already recorded in my buffer. Most modern FPGAs or DSPs have large enough memory and fast enough execution speed to meet real-time response requirements. $\endgroup$ Commented Sep 26, 2024 at 8:44
    $\begingroup$ @malik12 See, no matter how much you have buffered and how fast you can process it, you can only process up to the point where you have no more future samples in the buffer. Which is exactly the same as processing all the samples from the buffer with a FIR filter that does not need any future samples. $\endgroup$ Commented Sep 26, 2024 at 8:51
0
$\begingroup$

When processing buffered data by a FIR (or IIR) filter, you are "missing the opportunity" of acting upon "future information"*) when you are processing the first sample in the buffer, but not when you are processing the last sample.

Certain operations might potentially work "better" if you redesign them to look at the buffer as a whole before writing out the first sample. But this also introduces a time-variant behaviour that may make the system harder to design and comprehend.

*) Not really "future" as in Marty McFly, but future in the sense that we have knowledge of sample x(n+1) when we are processing sample x(n).
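The varying look-ahead can be sketched as follows (a hypothetical buffer of size B = 8; my own illustration, not from the original answer):

```python
# Within one buffer of size B, the look-ahead available to each sample shrinks
# from B-1 (first sample) down to 0 (last sample) -- a time-variant quantity,
# even though the buffer size itself is fixed.
B = 8
lookahead = [B - 1 - n for n in range(B)]
print(lookahead)  # → [7, 6, 5, 4, 3, 2, 1, 0]
```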

$\endgroup$
3
    $\begingroup$ Can you please elaborate on why the time-variant behavior occurs? My understanding is that if the coefficients are fixed, the behavior of even the non-causal filter would be time-invariant. $\endgroup$ Commented Sep 26, 2024 at 11:04
    $\begingroup$ I am not talking about LTI filtering (which obviously is time-invariant). I am talking about a general processing function, which (if you do not introduce additional delay aside from the buffering) would have a look-ahead of "buffer size" for the first sample in the buffer and 0 for the last sample in the buffer. If you plan on exploiting that look-ahead for something useful, you will introduce time-variant behaviour. $\endgroup$ Commented Sep 26, 2024 at 11:32
    $\begingroup$ A FIR filter is in practice just a set of weights applied to recent history and summed. If you keep a history of N samples, you can let those N weights have any value you desire. If implemented on a real-time buffered pipeline, that does not "buy" you anything in terms of latency, as the newest output sample for each frame will still have to be that same weighted sum of the last N samples of the input buffer. $\endgroup$ Commented Sep 26, 2024 at 11:35
