Addendum 2:
To clarify for @MauryMarkowitz.
We distinguish between p-code and tokenization. P-code is "just like" machine code; it's just machine code for an idealized, pseudo machine rather than a specific CPU. Notably, especially on older architectures, it tends to be more compact than actual machine code.
Tokenization is basically a step past parsing: reducing commands to internal "tokens", or IDs (like the text "PRINT" to the value 123), converting numeric source strings to binary, and so on. Languages typically have several phases of processing. The first two are commonly converting the text into tokens (this phase is called "lexing", for lexical analysis), then parsing.
Lexing converts raw text into higher-level constructs (i.e. tokens), then the parse phase works off of tokens instead of raw text.
For example, given a string:

    IF "BOB" 123 FOR THEN

That could lex into <IF-KEYWORD>, <STRING>, <NUMBER>, <FOR-KEYWORD>, <THEN-KEYWORD>. The lexer can identify all of these; it's up to the parser to enforce actual syntax.
MS-BASIC tends to do both when a line of code is entered. It lexes the string into tokens, then it parses for syntax correctness. When the parse is done you have a stream of tokens; in fact, it's the same stream of tokens the parser started with. But now you "know" that the token list is syntactically correct, and it can be sent to an evaluator without the need for any extra error checking (beyond simple corruption).
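As a rough sketch of that "crunching" step, here is a toy version in Python. The keyword byte values here are made up; real MS-BASIC dialects use their own token tables, and the real tokenizer also has to deal with string literals, REM comments, and so on:

    # Hypothetical keyword token values; real dialects differ.
    KEYWORD_TOKENS = {"PRINT": 0x97, "THEN": 0xA7, "FOR": 0x81, "IF": 0x8B}

    def crunch(line):
        """Replace keyword text with single-byte tokens; keep the rest verbatim."""
        out = bytearray()
        i = 0
        while i < len(line):
            for word, code in KEYWORD_TOKENS.items():
                if line.upper().startswith(word, i):
                    out.append(code)      # one byte instead of the keyword text
                    i += len(word)
                    break
            else:
                out.append(ord(line[i]))  # non-keyword text stored as-is
                i += 1
        return bytes(out)

    print(crunch('IF A>0 THEN PRINT A').hex(' '))
    # 8b 20 41 3e 30 20 a7 20 97 20 41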
At the end, this tokenized form is NOT "compiled code". It's an intermediate state for use by the evaluator.
P-code is "compiled code".
Consider this example (using contrived elements):
    A = 1 + 2 * B

As a token stream, this looks like:

    <VAR-A> <ASSIGN> <NUMBER-1> <PLUS> <NUMBER-2> <MULTIPLY> <VAR-B>
This is what is sent to the evaluator.
In p-code, it will look something like this:

    PUSH 2
    PUSH <VAR-B>
    MULTIPLY
    PUSH 1
    ADD
    STORE A

This is a simple stack machine, like an HP calculator.
A key thing to note is how different the equation looks. In the token stream, it's still roughly in "algebraic" form. The evaluator still needs to deal with issues of operator precedence, among other things.
In the second form, the compiler has already dealt with that and created the instruction stream accordingly. The compiler can actually rewrite the source code into a form more suitable for its efficient execution.
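To show how simple the evaluator for the compiled form can be, here is a minimal stack machine in Python that executes the contrived p-code above. The instruction names follow the example; they are not any real p-code format:

    # Toy stack machine; instructions are (opcode, optional-operand) tuples.
    def run(program, variables):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                arg = args[0]
                # Push a literal number, or the current value of a named variable.
                stack.append(variables[arg] if isinstance(arg, str) else arg)
            elif op == "MULTIPLY":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "STORE":
                variables[args[0]] = stack.pop()
        return variables

    # A = 1 + 2 * B, already flattened by the "compiler":
    program = [("PUSH", 2), ("PUSH", "B"), ("MULTIPLY",),
               ("PUSH", 1), ("ADD",), ("STORE", "A")]
    print(run(program, {"B": 10}))   # {'B': 10, 'A': 21}

There is no precedence logic anywhere in that loop; the ordering of the instructions already encodes it.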
Using our IF-MODIFIER, as @Dirkt mentioned:
    A = A + 1 IF A > 0

The token stream is:

    <VAR-A> <ASSIGN> <VAR-A> <PLUS> <NUMBER-1> <IF-KEYWORD> <VAR-A> <GREATER-THAN> <NUMBER-0>
Taken at face value, an evaluator does not have enough information to properly evaluate this. The compiler gets to process it as a whole, and can rewrite it into something like this:
    PUSH <VAR-A>
    PUSH 0
    COMPARE
    BRANCH-LESS-THAN-EQUAL LBL1
    PUSH <VAR-A>
    PUSH 1
    ADD
    STORE A
    LBL1:

You'll notice that, again, this looks nothing like the source code or the token stream.
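Extending the toy stack machine with a program counter and labels makes this runnable as well; again, the instruction set is contrived for illustration:

    # Toy stack machine, extended with a program counter so it can branch.
    def run(program, variables):
        # Resolve label positions first; labels are ("LABEL", name) entries.
        labels = {args[0]: i for i, (op, *args) in enumerate(program) if op == "LABEL"}
        stack, pc = [], 0
        while pc < len(program):
            op, *args = program[pc]
            pc += 1
            if op == "PUSH":
                arg = args[0]
                stack.append(variables[arg] if isinstance(arg, str) else arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "STORE":
                variables[args[0]] = stack.pop()
            elif op == "COMPARE":
                b, a = stack.pop(), stack.pop()
                stack.append((a > b) - (a < b))   # -1, 0 or 1, like a CPU compare
            elif op == "BRANCH-LESS-THAN-EQUAL":
                if stack.pop() <= 0:
                    pc = labels[args[0]]
            elif op == "LABEL":
                pass                              # labels are no-ops at run time
        return variables

    # A = A + 1 IF A > 0, as rewritten by the "compiler":
    program = [("PUSH", "A"), ("PUSH", 0), ("COMPARE",),
               ("BRANCH-LESS-THAN-EQUAL", "LBL1"),
               ("PUSH", "A"), ("PUSH", 1), ("ADD",), ("STORE", "A"),
               ("LABEL", "LBL1")]
    print(run(program, {"A": 5}))    # {'A': 6}
    print(run(program, {"A": -3}))   # {'A': -3}

The evaluator never sees the IF at all; the decision has been turned into a plain conditional branch.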
This is a key distinction between a tokenized stream and a compiled representation. There's nothing stopping MS-BASIC from going the compile-to-p-code route; it just turns out that the tokenized representation saves memory over the compiled version. You get a "running program" for little more than the memory cost of the source code, versus the compiled version, where you have to maintain both the source code and the compiled artifact.
This is notable for an interactive environment; it's obviously not an issue for a normal "compiled" system. But RSTS BASIC-PLUS is an interactive system with a compiler, which makes it stand out in the world of BASICs.