What does it mean when I type in an escaped Unicode string literal in my C++ program?
To quote the standard:
A universal-character-name is translated to the encoding, in the appropriate execution character set, of the character named. If there is no such encoding, the universal-character-name is translated to an implementation-defined encoding.
Typically, the execution character set will be ASCII-compatible (for example, UTF-8), and it contains a character with value 1. So \u0001 will be translated into a single char with value 1.
If you were to specify non-ASCII characters, like \u263A, you might see more than one byte per character.
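As a quick sanity check, here's a minimal sketch (the `dump` helper is just for illustration) that prints the bytes the compiler actually produced for each literal. On an implementation whose execution character set is UTF-8, `\u0001` comes out as the single byte 01 and `\u263A` as the three bytes E2 98 BA:

```cpp
#include <cstdio>
#include <cstring>

// Print the length and the individual byte values of a narrow string,
// so we can see how the implementation encoded each universal-character-name.
static void dump(const char* label, const char* s)
{
    std::printf("%s (%zu bytes):", label, std::strlen(s));
    for (const char* p = s; *p != '\0'; ++p)
        std::printf(" %02X", static_cast<unsigned>(static_cast<unsigned char>(*p)));
    std::printf("\n");
}

int main()
{
    dump("\\u0001", "\u0001");  // typically one byte: 01
    dump("\\u263A", "\u263A");  // typically three bytes under UTF-8: E2 98 BA
}
```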
Shouldn't it take 4 bytes for 2 characters (assuming UTF-16)?
If it were UTF-16, yes. But a std::string can't be UTF-16-encoded unless char has 16 bits, which it usually doesn't. UTF-8 is a much more likely encoding; in UTF-8, every character with a value up to 127 (that is, the whole ASCII set) is encoded as a single byte.
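To make the contrast concrete, here is a small sketch comparing a narrow literal with a u"" (UTF-16) literal; the sizes in the comment are what you would typically see with an 8-bit char and a UTF-8 execution character set:

```cpp
#include <cstdio>
#include <string>

int main()
{
    // Narrow literal: encoded in the implementation-defined execution
    // character set (typically ASCII/UTF-8), one byte per \u0001.
    std::string narrow = "\u0001\u0001";

    // UTF-16 literal: each \u0001 becomes one 16-bit code unit.
    std::u16string utf16 = u"\u0001\u0001";

    std::printf("narrow: %zu elements, %zu bytes\n",
                narrow.size(), narrow.size() * sizeof(char));
    std::printf("utf16:  %zu elements, %zu bytes\n",
                utf16.size(), utf16.size() * sizeof(char16_t));
    // Typical output:
    //   narrow: 2 elements, 2 bytes
    //   utf16:  2 elements, 4 bytes
}
```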
Why are the first two characters of s (first two bytes) equal?
With the above assumptions, they are both the character with value 1.
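The question doesn't show how s was declared, so assuming it was something like the following, you can verify this directly:

```cpp
#include <cassert>
#include <string>

int main()
{
    // Hypothetical declaration of s, matching the behaviour described above.
    std::string s = "\u0001\u0001";

    // On an ASCII-compatible execution character set, both escapes encode
    // the same single byte with value 1.
    assert(s.size() == 2);
    assert(s[0] == s[1]);
    assert(s[0] == '\x01');
}
```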
If you prepend L to the string literal, then I suppose it's often UTF-16, but that's not what we have here. (An implementation could also use a 16-bit char with UTF-16 as the basic execution character set; I don't know of any that do, but the standard definitely allows it.)