My statistics professor says:
Let $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} y$, where $y$ is some constant. Suppose, to be general, that $X \in \mathbb{R}^k$ and $y \in \mathbb{R}^m$. Then $$\begin{pmatrix}X_n \\ Y_n \end{pmatrix} \xrightarrow{d} \begin{pmatrix}X\\y\end{pmatrix}.$$
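To check that I'm at least reading the statement correctly, I ran a small simulation. The specific pair below (the sample mean of Exp(1) draws and its standardized version) is just my own toy example, not from the lecture, and the two coordinates are as dependent as possible:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_pair(n, reps=200_000):
    """Draw `reps` copies of (X_n, Y_n) built from n Exp(1) observations.

    The mean of n Exp(1) draws is Gamma(n, scale=1/n), so it can be drawn
    directly:
        Y_n = sample mean        -> 1 in probability (a constant, by the LLN),
        X_n = sqrt(n)*(Y_n - 1)  -> N(0, 1) in distribution (CLT).
    Note that X_n is a deterministic function of Y_n, so the pair is as far
    from independent as possible.
    """
    Yn = rng.gamma(n, 1.0 / n, size=reps)
    return np.sqrt(n) * (Yn - 1.0), Yn

# The claim, as I understand it: the joint CDF of (X_n, Y_n) at a point (x, y)
# with y > 1 should approach P(N(0,1) <= x), i.e. the limit vector is (Z, 1).
x, y = 0.5, 1.05
print("target P(Z <= 0.5):", norm.cdf(x))
for n in (10, 100, 1_000, 10_000):
    Xn, Yn = sample_pair(n)
    print(n, np.mean((Xn <= x) & (Yn <= y)))
```

The empirical joint CDF does seem to approach $\Phi(0.5)$ as $n$ grows, despite the strong dependence, which I take to be the point of the professor's remark about independence.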
I saw here that joint convergence does not follow from marginal convergence in general. Why does it hold when $Y_n$ converges in distribution to a constant? My professor also said that this has nothing to do with independence and that I'm thinking about it the wrong way, but I'm unsure what he means. A clarification on this particular point would be helpful.
(I also don't fully understand the proof in Rao (1973), pages 122–123. Why do those limits imply the convergence in distribution result, and what does it mean for the limit to be indeterminable?)
For context, the result from Rao (1973); in this statement $Y_n$ converges to the constant $c$, and $(x, y)$ is the point at which the joint CDF is evaluated:
If $y > c$, then
$$ \begin{aligned} P(X_n < x) &\geq P(X_n < x, Y_n \leq y) = P(X_n < x) - P(X_n < x, Y_n > y) \\ &\geq P(X_n < x) - P(Y_n > y), \end{aligned} $$
so in the limit we have
$$ \lim_{n \rightarrow \infty} P(X_n < x, Y_n \leq y) = P(X < x), $$
and $0$ if $y<c$.
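To convince myself of these two limit statements, I also evaluated each term of the inequality chain numerically, again with my own toy pair (so $c = 1$ here, not anything from Rao), once for a $y$ above $c$ and once for a $y$ below:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy pair as in my earlier snippet (my construction, not Rao's):
# Y_n is the mean of n Exp(1) draws, so Y_n -> c = 1 in probability,
# and X_n = sqrt(n) * (Y_n - 1) -> N(0, 1) in distribution.
x = 0.5
for y in (1.05, 0.95):                              # one point above c, one below
    print(f"y = {y}")
    for n in (100, 1_000, 10_000):
        Yn = rng.gamma(n, 1.0 / n, size=200_000)
        Xn = np.sqrt(n) * (Yn - 1.0)
        lower = np.mean(Xn < x) - np.mean(Yn > y)   # P(X_n < x) - P(Y_n > y)
        joint = np.mean((Xn < x) & (Yn <= y))       # P(X_n < x, Y_n <= y)
        upper = np.mean(Xn < x)                     # P(X_n < x)
        print(f"  n={n:>6}: {lower:.3f} <= {joint:.3f} <= {upper:.3f}")
```

For $y = 1.05 > c$ the middle term gets squeezed toward $P(X < x)$, and for $y = 0.95 < c$ it drifts to $0$, which matches the two cases in the quoted argument.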
I get this, but I just don't understand what is meant by
When $y=c$, the limit is indeterminable and $(x, c)$ is a discontinuity point for any $x$.
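My (possibly mistaken) attempt to see the ambiguity concretely, with deterministic sequences I made up: if $Y_n = c - 1/n$, then for every $n$
$$ P(X_n < x, \; Y_n \leq c) = P(X_n < x) \rightarrow P(X < x), $$
whereas if $Y_n = c + 1/n$, then $P(X_n < x, \; Y_n \leq c) = 0$ for every $n$, even though both sequences converge to $c$. Is this the sense in which the limit at $y = c$ is "indeterminable", and why $(x, c)$ is a discontinuity point of the limiting CDF?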