So the implicit question seems to be: How the heck is this IO layer business supposed to be used properly? Or perhaps: Is there a bug in the implementation of the :crlf layer?
After 13 years, there is no clarity. But the short answer is that :crlf appears to be meant to be applied, somewhat counter-intuitively, after any :encoding(…) layer and not before it. Doing it the other way around produces garbled output for UTF-16/UCS-2, as can be demonstrated with the MSYS build of Perl 5.36.0 on Windows (the one shipping with "Git for Windows"), which incidentally does not have :crlf enabled by default:
    $ perl -E "say for PerlIO::get_layers(*STDOUT)"
    unix
    perlio

    $ perl -Mopen=":std,OUT,:encoding(UCS-2LE):crlf" -E say | od -c
    0000000  \r  \0  \n  \0        → correct

    $ perl -Mopen=":std,OUT,:crlf:encoding(UCS-2LE)" -E say | od -c
    0000000  \r  \n  \0            → wrong!
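The same ordering can be spelled out in an ordinary script instead of a one-liner. The sketch below (the file name ucs2.txt is made up) writes a UCS-2LE file with :crlf pushed last, i.e. as the topmost layer; on a perl where :crlf is part of the defaults it may additionally be necessary to start the layer string with :raw, which, per the doc quote further down, only unsets the CRLF translation flag there:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # :encoding(UCS-2LE) first, :crlf last (topmost): each "\n" is first
    # expanded to "\r\n" by :crlf and only then widened to \r \0 \n \0 by
    # the encoding layer, matching the "correct" od output above.
    open my $out, '>:encoding(UCS-2LE):crlf', 'ucs2.txt'
        or die "Cannot open ucs2.txt: $!";
    print {$out} "\n";
    close $out or die "Cannot close ucs2.txt: $!";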
The PerlIO doc says:
On DOS/Windows like architectures where this layer is part of the defaults, it also acts like the :perlio layer, and removing the CRLF translation (such as with :raw) will only unset the CRLF translation flag.
Talk of a "translation flag" seems to suggest that :crlf is another so-called pseudo-layer which merely sets a flag respected by other layers, but I don't know whether that is actually the case.
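One way to poke at that, without claiming it settles anything, is to dump the layer stack together with each layer's flag word using the details option of PerlIO::get_layers and compare the stack before and after pushing :crlf. A rough sketch follows (test.txt is just a throwaway file name, and I make no claims about what the individual flag bits mean):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # With details => 1, PerlIO::get_layers returns triplets of layer
    # name, layer arguments and a numeric flags word for each layer.
    sub dump_layers {
        my ($label, $fh) = @_;
        my @details = PerlIO::get_layers($fh, output => 1, details => 1);
        print STDERR "$label:\n";
        while (my ($name, $args, $flags) = splice @details, 0, 3) {
            printf STDERR "  %-16s args=%-16s flags=0x%08x\n",
                $name, $args // '-', $flags // 0;
        }
    }

    open my $fh, '>', 'test.txt' or die "Cannot open test.txt: $!";
    dump_layers('default stack', $fh);

    binmode $fh, ':encoding(UCS-2LE)' or die "binmode: $!";
    dump_layers('after :encoding(UCS-2LE)', $fh);

    binmode $fh, ':crlf' or die "binmode: $!";
    dump_layers('after pushing :crlf on top', $fh);

    close $fh;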
Since Perl 5.14, you can also apply another :crlf layer later, such as when the CRLF translation must occur after an encoding layer.
The very notion that CRLF translation should occur after encoding seems counter-intuitive in the case of UTF-16/UCS-2.
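Reading that together with the earlier remark about :raw only unsetting the translation flag, the way to apply the layer "later" appears to be to drop the handle back to :raw first and then re-push the layers in the order demonstrated above. Here is a sketch for STDOUT, which I have only reasoned about rather than tested on every platform:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Reset to raw bytes first (on Windows this merely clears the default
    # CRLF translation flag), then push the encoding layer, then push
    # another :crlf layer on top of it.
    binmode STDOUT, ':raw'               or die "binmode: $!";
    binmode STDOUT, ':encoding(UCS-2LE)' or die "binmode: $!";
    binmode STDOUT, ':crlf'              or die "binmode: $!";

    print "\n";    # should leave the stack as  \r  \0  \n  \0

Piped through od -c, this should reproduce the \r \0 \n \0 sequence of the first one-liner above.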
All this seems a bit quirky and not properly specified. To add to the confusion (or possibly to gain more insight), read the open::layers doc, where someone with more knowledge than I have has evidently studied the issue and compiled a list of "historical quirks" and "issues".
This is as far as my own research has brought me and I'll leave it there.