I am wondering why Unicode encoding is necessary in JavaScript. I am looking at [utf8.js](https://github.com/mathiasbynens/utf8.js) as an example, and at the [UTF-8 spec](https://encoding.spec.whatwg.org/#utf-8), but I'm not really following the different pieces of data. I also don't fully understand when and where you are supposed to encode/decode, or what the "current" format of some set of bytes is in a programming language, so that's making it difficult to follow.
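
For concreteness, this is the kind of usage I mean (based on my reading of the utf8.js README, so the details here are my own assumption of how it works):

```js
// Node-style usage of utf8.js: encode takes an ordinary JavaScript string
// and returns a string whose characters are the raw UTF-8 bytes; decode
// reverses that.
var utf8 = require('utf8');

var encoded = utf8.encode('\xA9');     // '\xC2\xA9' — U+00A9 becomes two bytes
var decoded = utf8.decode('\xC2\xA9'); // '\xA9'
```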

> UTF-8’s encoder’s handler, given a stream and code point, runs these steps:...

I would like to know what the _stream_ is, and what the _code point_ is. I understand that [code points](https://www.quora.com/In-the-Unicode-standard-what-is-the-difference-between-a-code-unit-and-a-code-point) are (roughly) characters, and that in UTF-8 a single code point may be encoded as multiple 8-bit code units.
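
For example, this is where my mental model starts to break down (from playing around in the console, so tell me if I'm misreading it):

```js
// A single code point outside the Basic Multilingual Plane, U+1D306 ('𝌆').
var s = '\uD834\uDF06';

console.log(s.length);                      // 2 — counted in UTF-16 code units
console.log(s.charCodeAt(0).toString(16));  // 'd834' — one surrogate half
console.log(s.codePointAt(0).toString(16)); // '1d306' — the actual code point
```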

I would also generally like to know why a language can't just have "Unicode", with no UTF-8/UTF-16 encoding or decoding at all, because the text is already in, say, UTF-8 format. This makes me wonder whether JavaScript needs to do this encoding/decoding for some reason, such as JavaScript using UTF-16 internally, so that the stream of bits you pass to utf8.js is a stream of UTF-16-encoded bits. Or maybe it's not that, and instead the stream of bits you pass to utf8.js is a stream of _something else_, such as decimal encoding or whatever that may mean.
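
To make my guess concrete, here is what I think is happening, though this assumption is exactly the part I'm unsure about:

```js
// My assumption: the JavaScript string is stored as UTF-16 code units,
// and utf8.encode re-encodes each code point as UTF-8 bytes.
var utf8 = require('utf8');

var euro = '\u20AC';           // '€' — one UTF-16 code unit in the string
var bytes = utf8.encode(euro); // '\xE2\x82\xAC' — three UTF-8 bytes

console.log(euro.length);                      // 1
console.log(bytes.length);                     // 3
console.log(bytes.charCodeAt(0).toString(16)); // 'e2'
```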

I am just not sure what format/encoding the bytes are coming in as, and also why we need to do the conversion.