The only time I would avoid Unicode is in an embedded system where the requirements specifically state the system only needs to support a single code page (or ASCII).
Software is almost too easy to reuse. Whether it is a public project that ends up being used in ways the author never envisioned, or a corporate project that some suit repurposes, you never know when and where software will be reused. With the global Internet, people of all languages may find a use for your software, and it should support languages such as Chinese, which are in widespread use and require Unicode to be handled properly.
Embedded systems (a category in which I do NOT include smartphones) are the only domain I can think of that would resist the trend of software being used in diverse locations.
Edit: I just realized I did not really explain why I would avoid Unicode in those situations, even though the answer is fairly obvious. While some combinations of characters and encodings take up the same space as 8-bit characters (e.g. English text in UTF-8), not all do. Storage requirements grow, especially for scripts whose characters necessarily occupy multiple bytes (e.g. Chinese, used by over a billion people). Furthermore, decoding Unicode and turning it into glyphs on a user interface requires additional code and font data for which an embedded system may not have the memory. If I had to write a routine to turn ASCII characters into glyphs, it would likely be a small lookup table, with no decoding of variable-length sequences and no mapping of code points into planes containing thousands of glyphs.
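To make the size difference concrete, here is a minimal sketch in C of what an ASCII-only glyph routine can look like. The 5x7 font table and the `put_pixel` display call are hypothetical placeholders (the glyph bytes shown are illustrative, not taken from any particular font):

```c
#include <stdint.h>

/* Hypothetical 5x7 bitmap font: one 5-byte column bitmap per printable
 * ASCII character (0x20..0x7E). Only a couple of entries are shown;
 * the rest would be filled in the same way. */
static const uint8_t font5x7[95][5] = {
    ['A' - 0x20] = { 0x7E, 0x11, 0x11, 0x11, 0x7E },
    ['B' - 0x20] = { 0x7F, 0x49, 0x49, 0x49, 0x36 },
    /* ... remaining printable characters ... */
};

/* Assumed to be provided by the display driver. */
extern void put_pixel(int x, int y, int on);

/* Draw one ASCII character: a bounds check and an array index,
 * with no decoding step and no large code-point-to-glyph mapping. */
void draw_char(int x, int y, char c)
{
    if (c < 0x20 || c > 0x7E)
        c = '?';                      /* substitute for non-printables */

    const uint8_t *glyph = font5x7[c - 0x20];
    for (int col = 0; col < 5; col++)
        for (int row = 0; row < 7; row++)
            put_pixel(x + col, y + row, (glyph[col] >> row) & 1);
}
```

A Unicode-capable equivalent would additionally need a UTF-8 (or UTF-16) decoder, a code-point-to-glyph index, and font data for every supported script, which is exactly the kind of code and memory a small embedded target may not have to spare.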