I can give you an example of a corner case that "could never occur" but nevertheless caused a disaster.
When the Ariane 4 was being developed, the values from the lateral accelerometers were scaled to fit into a 16-bit signed integer. Because the maximum possible output of the accelerometers, when scaled, could never exceed 32767 and the minimum could never fall below -32768, there was “no need for the overhead of range checking”. In general all inputs are supposed to be range checked before any conversion, but in this case that would have meant trying to catch an impossible corner case.
Several years later, when the Ariane 5 was being developed, the code for scaling the lateral accelerometers was reused with minimal testing because it was “proven in use”. Unfortunately, the larger rocket could expect larger lateral accelerations, so the accelerometers were upgraded and could produce larger 64-bit float values.
These larger values "wrapped" in the conversion code (remember, no range checking), and the results on the first launch in 1996 weren't good. It cost the company millions and caused a major hiatus in the program.
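To illustrate the difference the missing check makes: the actual flight code was written in Ada and I am not reproducing it here; this is just a minimal C sketch of the idea, with a made-up scale factor and hypothetical function names.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical scale factor - purely illustrative, not the real value. */
#define SCALE 100.0

/* Ariane 4 style conversion: no range check, because the input "could
 * never" produce a value outside the int16_t range. Converting an
 * out-of-range double to a 16-bit integer like this is undefined
 * behaviour in C; in practice it wraps or produces garbage. */
int16_t convert_unchecked(double accel)
{
    return (int16_t)(accel * SCALE);
}

/* The conversion the standard called for: clip to the representable range
 * so an unexpected input degrades gracefully instead of corrupting data. */
int16_t convert_clamped(double accel)
{
    double scaled = accel * SCALE;
    if (scaled > INT16_MAX) return INT16_MAX;
    if (scaled < INT16_MIN) return INT16_MIN;
    return (int16_t)scaled;
}

int main(void)
{
    double too_big = 400.0;  /* larger than the old rocket could ever produce */
    printf("unchecked: %d\n", convert_unchecked(too_big)); /* garbage / UB */
    printf("clamped:   %d\n", convert_clamped(too_big));   /* 32767        */
    return 0;
}
```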
The point that I am trying to make is that you should not dismiss test cases as "can never happen", "extremely unlikely", etc. The standards that they were coding to called for range checking of all external input values; if that had been tested for and handled, then the disaster might have been averted.
Note that in Ariane 4 this was not a bug (everything worked well for every possible value); it was a failure to follow standards. When the code was reused in a different scenario it failed catastrophically, whereas if the range of values had been clipped it would likely have failed gracefully, and the existence of a test case for this might have triggered a review of the values. It is also worth noting that, while the coders and testers came in for some criticism from the investigators following the explosion, the majority of the blame was placed on management, QA and leadership.
Clarification
While not all software is safety critical, nor so spectacular when it fails, my intention was to highlight that "impossible" tests can still have value. This is the most dramatic case that I know of, but robotics can also produce some disastrous outcomes.
Personally, I would say that once someone has highlighted a corner case to the test team, a test should be put in place to check it (a minimal sketch of such a test follows below). The implementation team lead or project manager may decide not to address any failures found, but should at least be aware that the shortcomings exist. Alternatively, if the testing is too complex or expensive to implement, an issue can be raised in whatever tracker is in use and/or in the risk register to make it clear that this is an untested case, and that it may need to be addressed before a change of use, or may rule out an inappropriate use.
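As a rough illustration of what such a test might look like, here is a small C sketch that exercises the "impossible" boundary of the hypothetical `convert_clamped` routine from the earlier sketch. Even if the team decides not to act on a failure, the test's existence documents the limitation for whoever reuses the code later.

```c
#include <assert.h>
#include <stdint.h>

int16_t convert_clamped(double accel);  /* from the sketch above */

/* Boundary test for the "impossible" corner case: feed the conversion a
 * value far outside anything the original rocket could produce and check
 * that it degrades gracefully rather than wrapping. */
int main(void)
{
    assert(convert_clamped( 1.0e6) == INT16_MAX);  /* clips high              */
    assert(convert_clamped(-1.0e6) == INT16_MIN);  /* clips low               */
    assert(convert_clamped( 0.0)   == 0);          /* normal values unchanged */
    return 0;
}
```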
