TL;DR: How do you interpret matrix/vector multiplication if the matrix is a graph adjacency matrix?
There is a bijection between $\mathbb{R}^{m \times m}$ and directed weighted graphs on $m$ labeled nodes (self-loops allowed). Specifically, you can interpret $M \in \mathbb{R}^{m \times m}$ as the adjacency matrix of a weighted graph with $m$ nodes, where $M_{ij}$ is the weight of the edge from node $i$ to node $j$.
One famous application of this is Markov processes. Define a Markov process to be any weighted graph with the property that the outgoing weights from every node sum to 1; equivalently, every row of the adjacency matrix sums to 1.
Let $P_m \subset \mathbb{R}^{m \times m}$ be the set of $m$-node Markov processes.
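To make this concrete, here is a small hypothetical example (the matrix is invented for illustration): a 3-node adjacency matrix whose rows each sum to 1, i.e. a member of $P_3$.

```python
# A hypothetical 3-node Markov process (not from the source):
# M[i][j] is the weight of the edge from node i to node j.
M = [
    [0.9, 0.1, 0.0],  # node 0 mostly stays put
    [0.5, 0.0, 0.5],  # node 1 splits its outgoing weight evenly
    [0.0, 0.3, 0.7],
]

# The defining property of P_m: every row sums to 1.
for row in M:
    assert abs(sum(row) - 1.0) < 1e-12
```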
If you have some Markov process $\bar{M} \in P_m$, and you also have some "weight" vector $x \in \mathbb{R}^{m}$ with nonnegative entries and $\|x\|_1 = 1$, then $$\bar{M}^\top x$$
will output a "new" weight vector, $\|\bar{M}^\top x\|_1 = 1$, which represents what happens to the weight vector $x$ after one timestep on the Markov process. (Note the transpose: with the row-sum convention above, the $j$-th entry of $\bar{M}^\top x$ is $\sum_i M_{ij} x_i$, the total weight flowing into node $j$; multiplying by $\bar{M}$ itself would not preserve the total weight.)
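The one-timestep update can be sketched in a few lines of Python (the 2-node matrix is a made-up example, and `step` is a hypothetical helper name):

```python
def step(M, x):
    """One timestep: new_x[j] = sum_i M[i][j] * x[i], i.e. the transpose update."""
    m = len(M)
    return [sum(M[i][j] * x[i] for i in range(m)) for j in range(m)]

M = [[0.9, 0.1],
     [0.4, 0.6]]      # hypothetical 2-node Markov process
x = [1.0, 0.0]        # all weight starts on node 0

x1 = step(M, x)       # x1 == [0.9, 0.1]
assert abs(sum(x1) - 1.0) < 1e-12  # total weight is preserved
```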
If you have
$$\bar{M}^\top x^{*} = x^{*},$$ then $x^{*}$ (an eigenvector of $\bar{M}^\top$ with eigenvalue 1, i.e. a left eigenvector of $\bar{M}$) is the steady-state distribution of the Markov process. This fact underlies the famous PageRank algorithm that catalyzed Google. Specifically, modeling the web as a Markov process, $x^*$ is the PageRank vector of the web, and can be used for determining the importance of web pages.
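A minimal power-iteration sketch of finding this steady state, assuming the same invented 2-node chain as above (real PageRank additionally mixes in a damping factor, which is omitted here):

```python
def steady_state(M, iters=200):
    """Power iteration: repeatedly apply the one-step (transpose) update
    starting from the uniform distribution until it stops changing."""
    m = len(M)
    x = [1.0 / m] * m
    for _ in range(iters):
        x = [sum(M[i][j] * x[i] for i in range(m)) for j in range(m)]
    return x

M = [[0.9, 0.1],
     [0.4, 0.6]]              # hypothetical 2-node Markov process
x_star = steady_state(M)      # converges to approximately [0.8, 0.2]

# Fixed-point check: one more timestep leaves x_star (approximately) unchanged.
x_next = [sum(M[i][j] * x_star[i] for i in range(2)) for j in range(2)]
assert all(abs(a - b) < 1e-9 for a, b in zip(x_star, x_next))
```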
Thus for Markov processes I have an intuitive understanding of what matrix multiplication "means" for weight vectors. But I can't help wondering whether there is a similarly intuitive understanding if you relax the Markov-process/unit-weight constraints.
My question specifically: what does matrix multiplication represent in the more general case of $M \in \mathbb{R}^{m \times m}$ and $x \in \mathbb{R}^m$, i.e. when the outgoing weights of a node do not sum to 1? Is there an "intuitive" explanation of what this physically means? Something in the spirit of "number of paths between two nodes" is more what I'm after than a double-summation formula with no motivation.
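For what it's worth, the "number of paths" intuition mentioned above can be checked numerically: for an unweighted ($0/1$) adjacency matrix $A$, the entry $(A^k)_{ij}$ counts the length-$k$ walks from node $i$ to node $j$. A sketch on a made-up 3-node graph:

```python
# Hypothetical 3-node directed graph with edges 0 -> 1, 1 -> 2, 0 -> 2.
A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]

def matmul(X, Y):
    """Plain square matrix multiplication."""
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

A2 = matmul(A, A)
# (A^2)[0][2] counts length-2 walks from 0 to 2; here there is exactly
# one such walk, namely 0 -> 1 -> 2.
assert A2[0][2] == 1
```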