I wrote a whole bluesky thread about this: https://bsky.app/profile/cecisharp.bsky.social/post/3ld2bpp5qj22h
Here's the fix and the short explanation:
locale::global(locale("en_US.UTF-8"));
wcout.imbue(locale("en_US.UTF-8"));

// Console output
wcout << L"Unicode character: \u03C6 (φ)" << endl;

// File output
wofstream outFile("output.txt");
outFile.imbue(locale("en_US.UTF-8"));
outFile << L"Unicode character: \u03C6 (φ)" << endl;
outFile.close();

return 0;
Stick that in your main. Remember to add these at the top (alongside the <iostream> and using namespace std; you presumably already have):
#include <fstream>
#include <locale>
Run the code. A file named output.txt will be created in your working directory (usually next to your source or executable files). Check whether the file correctly reads: "Unicode character: φ (φ)"
If the file shows the correct symbol but the console doesn't, the code is correct and the problem is with your console; keep reading this short answer. If the opposite is true, the problem is with your code, and for that, read the longer answer further down.
Open your command prompt and run this command: chcp 65001
This just sets your code page to 65001 (UTF-8). However, I don't actually think that's the problem here. I think you're probably seeing a character that isn't a question mark, but isn't the symbol you want either.
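If you don't want to run chcp by hand every time, the program can do the equivalent itself on Windows. This is just a minimal sketch of that idea (Windows-only, and not part of the fix above):

#include <windows.h>   // SetConsoleOutputCP
#include <iostream>
using namespace std;

int main() {
    SetConsoleOutputCP(CP_UTF8);             // same effect as running "chcp 65001"
    cout << "Unicode character: \xCF\x86\n"; // the raw UTF-8 bytes for the Greek letter φ
    return 0;
}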
Some console fonts, like Consolas, use glyph substitution for characters they don’t fully support, so you might be seeing a fallback glyph for the Russian character.
Older Windows consoles (like Command Prompt) don’t fully support all Unicode characters, even with UTF-8 enabled.
To show special characters you need to:
- Have a font that supports the character.
- Save the character as a wide character if it's not part of ASCII's normal range.
- Make sure your locale understands that character (a minimal sketch putting all three together follows this list).
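Here's that sketch as a complete program (it assumes your console font can draw the square and that your system accepts the locale name "en_US.UTF-8"):

#include <iostream>
#include <locale>
using namespace std;

int main() {
    locale::global(locale("en_US.UTF-8")); // a locale that understands Unicode
    wcout.imbue(locale());                 // make wcout use it too
    wchar_t ch = L'\u25A0';                // the square ■, stored as a wide character
    wcout << ch << endl;
    return 0;
}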
Now here's the long answer:
Why do some characters print on the console, and others don’t? Your immediate thought might be, "Oh, the console’s font doesn’t support those characters." And yeah, that makes sense. Except... it’s not always true.
For example, the default font for most Windows consoles is Consolas. If you open the Character Map, you’ll see that Consolas supports a ton of characters. Including the square symbol, ■. So... why isn’t it showing up?
Your next guess might be, "Maybe it’s because I’m using an extended ASCII character, and I need to declare it as a wide character." Hmm. Nope, that didn’t work either.
Okay, forget ASCII for a second. What if we assign the character using its Unicode code? Hmm... still nothing.
Fine. What if we skip all that and just look up the ASCII value for the character, assign that number to a char, and print it that way? Oh, now it works! Why?
Well, the answer involves bytes, encoding, and how your program interprets text. Let’s break it down.
Why Assigning the Number Directly Works
When you assign a char like this:
char ch = 254;
cout << ch;
It works. Why? Because a char in C takes up exactly 1 byte—that’s 8 bits. And 254 fits perfectly into those 8 bits.
Here’s what happens:
You assign 254 to the char. Internally, the program stores it as the binary value 11111110. The console reads this byte, looks it up in its active code page (like CP437), and renders it as ■. This works because there’s no interpretation or decoding involved. You’re giving the program exactly what it needs, so it just prints the symbol without any fuss.
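If you want to see that stored byte for yourself, a quick sketch with <bitset> makes the bit pattern visible (how the last line renders still depends on your console's code page and font):

#include <bitset>
#include <iostream>
using namespace std;

int main() {
    unsigned char ch = 254;
    cout << bitset<8>(ch) << '\n'; // prints 11111110
    cout << ch << '\n';            // the console looks this byte up in its code page (e.g. CP437 -> ■)
    return 0;
}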
But what about this code?
char ch = '■';
cout << ch;
Why doesn’t that work? After all, it’s the same character, right? Well, here’s where encoding comes into play.
Remember that our code is nothing more than a text file that we hand to a compiler to translate into binary. The encoding we use to save our source file determines how that translation is done.
Encoding is essentially the "translation system" that tells the computer how to interpret, store, and display text symbols. It's important because most of what we see on a computer screen is text. You'll even see an encoding option when saving something in Notepad... And since our source file is nothing more than a text file at the end of the day, we also save it with a specific encoding.
Most people probably encode their source files as UTF-8 without even knowing it; it's the standard. So, what is UTF-8 encoding? It's short for "Unicode Transformation Format - 8-bit", and it's a variable-length character encoding.
Basically, it's an encoding that understands all Unicode symbols and stores them using a varying number of bytes.
Can you see where I'm going with this? In C, a character is always exactly one byte. But with UTF-8 encoding, characters can have varying lengths: characters in the ASCII range (0–127) are encoded in 1 byte and have the same binary values as ASCII, while less common characters, like our square, use 2–4 bytes.
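You can see those varying lengths for yourself by dumping the bytes of a couple of string literals. A small sketch (it assumes the source file really is saved as UTF-8):

#include <iostream>
using namespace std;

int main() {
    const char* square = "■"; // three bytes in UTF-8
    for (const char* p = square; *p; ++p)
        cout << hex << int((unsigned char)*p) << ' '; // prints: e2 96 a0
    cout << '\n';

    const char* letter = "A"; // ASCII range: one byte, same value as ASCII
    cout << hex << int((unsigned char)letter[0]) << '\n'; // prints: 41
    return 0;
}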
So when we write this code here:
char ch = '■';
cout << ch;
... and save the source file with UTF-8 encoding, then run the program, we end up trying to fit multiple bytes into one byte, which the program realizes isn’t gonna work, and defaults to a question mark.
Alright, so what if we use a wchar_t instead? Like this:
wchar_t ch = L'■';
wcout << ch;
That gives wchar_t enough space to store the character, so it should work, right? Nope. Not yet.
The issue here isn’t the storage space—it’s the locale.
By default, C++ uses the "C" locale. This is a minimal locale that only understands basic ASCII characters. It doesn’t know what ■ is, even if you’ve stored it correctly.
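You can check this for yourself: a default-constructed locale is a copy of the current global locale, and at program start it reports the name "C". A small sketch:

#include <iostream>
#include <locale>
using namespace std;

int main() {
    // Before you change anything, the global locale is the minimal "C" locale.
    cout << locale().name() << '\n'; // prints: C
    return 0;
}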
To fix this, you need to tell your program to use a locale that understands Unicode. For example:
locale::global(locale("en_US.UTF-8"));
wchar_t ch = L'■';
wcout << ch;
This one will work.
With this line, you’re switching to the English (US) locale with UTF-8 encoding, which can handle Unicode characters. Now the program knows how to interpret L'■' and display it properly.
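One caveat, which is my own assumption and not part of the answer above: some systems don't accept the exact name "en_US.UTF-8", and the locale constructor throws a runtime_error for names it doesn't know. If you want to be defensive about it, a minimal sketch:

#include <iostream>
#include <locale>
#include <stdexcept>
using namespace std;

int main() {
    try {
        locale::global(locale("en_US.UTF-8"));
    } catch (const runtime_error&) {
        locale::global(locale("")); // fall back to the user's default locale
    }
    wcout.imbue(locale());
    wchar_t ch = L'\u25A0'; // ■
    wcout << ch << endl;
    return 0;
}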
So, let’s go back to everything we tried:
- Assigning the Number Directly: Worked because we skipped all encoding and just gave the program the byte 254. The console knew how to render it.
- Using a Literal: Failed because the source file was saved as UTF-8. The program couldn't fit the 3-byte UTF-8 sequence for ■ into a single char.
- Using a Wide Character: Failed until we set the locale. Even though wchar_t could store the character, the default "C" locale didn't understand Unicode.
- Setting the Locale: Worked because it allowed the program to interpret wide characters as Unicode. (And even when those steps are done correctly, getting the character to display properly inside whatever std::cout is connected to is a different matter altogether.)