The Series-based method works from the inside out by creating a SeriesData from each factor and then multiplying them. For the multiplication it effectively uses ListConvolve iteratively (not maximally efficient) on lists of approximate numbers. This is fast vector arithmetic.
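For instance, convolving two small coefficient lists with the same {1, -1} overhang and 0 padding used below is exactly polynomial multiplication:

ListConvolve[{1, 2}, {3, 4}, {1, -1}, 0] (* {3, 10, 8}, the coefficients of (1 + 2 x)(3 + 4 x) = 3 + 10 x + 8 x^2 *)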
Here is more or less the process, shown outside of Series. We set up the example.
n = 500; precision = MachinePrecision; l = RandomReal[{0, 1}, n, WorkingPrecision -> precision]; poly = Times @@ (x - l);
Reference expansion timing.
AbsoluteTiming[p1 = Expand[poly];] (* Out[53]= {0.094142, Null} *)
Timing to convert all factors of poly to lists.
AbsoluteTiming[ poly2 = Map[CoefficientList[#, x] &, Apply[List, poly]];] (* Out[18]= {0.000607, Null} *)
Timing to convolve iteratively.
AbsoluteTiming[ plist = Fold[ListConvolve[#1, #2, {1, -1}, 0] &, First[poly2], Rest[poly2]];] (* Out[54]= {0.002722, Null} *)
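As an aside on the "not maximally efficient" remark: the Fold above folds in one factor at a time. A balanced (pairwise) reduction, which convolves lists of comparable length, is a common alternative and can pay off when convolving two long lists is cheaper than many short-kernel passes. A minimal sketch, purely for comparison and not what Series itself does:

pairUp[lists_] := Join[ListConvolve[#[[1]], #[[2]], {1, -1}, 0] & /@ Partition[lists, 2], If[OddQ[Length[lists]], {Last[lists]}, {}]];
plistBalanced = First @ NestWhile[pairUp, poly2, Length[#] > 1 &];
Max[Abs[plistBalanced - plist]] (* should be at the level of rounding error *)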
Timing for converting back to a polynomial.
AbsoluteTiming[p3 = plist . x^Range[0, n];] (* Out[55]= {0.000479, Null} *)
Check that they are (more or less) equal.
p1 - p3 (* Out[73]= 0. *)
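Since the symbolic difference collects like terms, a check on the coefficient lists themselves is a bit more informative; a minimal sketch:

Max[Abs[CoefficientList[p1, x] - CoefficientList[p3, x]]] (* expect something at the level of machine rounding *)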
Here is the Series timing for comparison.
AbsoluteTiming[p2 = Normal[poly + O[x]^(n + 1)];] (* Out[77]= {0.021734, Null} *)
This is notably faster than direct expansion but also notably slower than the sum of the individual steps shown above. The difference is unavoidable Series overhead.
Expand[] beats O[x] for n = 2500 for me, but O[x] is quite a bit faster up to around n = 2000, which is remarkable.
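To reproduce that crossover on your own machine, something along these lines works; the n grid here is just one I picked, not from the original comparison:

Table[
  With[{p = Times @@ (x - RandomReal[{0, 1}, n])},
   {n, First @ AbsoluteTiming[Expand[p];],
    First @ AbsoluteTiming[Normal[p + O[x]^(n + 1)];]}],
  {n, 500, 3000, 500}]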