
As of v10.0, Expand no longer exhibits this slowdown.

Mathematica's expansion of expressions containing only symbols is fast, but with square roots it is very slow. In Maple and MuPAD both cases are fast: even with square roots, each takes less than a second.

Here is an example.

 Expand[(a + b + c + d + e)^100]; // Timing
 Expand[(Sqrt[2] + Sqrt[3] + Sqrt[5] + Sqrt[6] + Sqrt[7])^100]; // Timing

[screenshots of the Mathematica and Maple timing results]

Can you reproduce this?

I used Mathematica 9.0.0 and Maple 16 on Windows 7 64-bit.

Updated

Maple and MuPAD also give the exact result; please see the following picture. It is equal to Mathematica's result.

[screenshot of Maple's exact result]

  • Please do not use the bugs tag when you post a new question. This tag is used to categorize problems which were verified by the community to be due to a bug. If the consensus is that it's a bug, the tag will be added eventually. Commented Feb 1, 2013 at 20:14

3 Answers


This looks like a computation that rightfully should be slow: you end up with close to 5 million terms, and I suppose Mathematica is checking each one constantly throughout to see whether products of the square roots lead to square-able numbers.
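The rough term count can be verified directly: by stars and bars, the number of distinct monomials in a 5-variable expansion of degree 100 is a binomial coefficient (a quick sanity check, not part of the original timing):

```mathematica
(* number of distinct monomials in (x1 + x2 + x3 + x4 + x5)^100 *)
Binomial[100 + 5 - 1, 5 - 1]
(* 4598126 *)
```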

Why are Maple and MuPAD faster? I can't say for certain, but looking at the timings I would suspect they are just using approximate floats, so they calculate a single number and then raise it to the 100th power. In Mathematica, the second line is treated in a sense just as symbolically as the first: Sqrt[2] will not be converted to the approximate real value 1.41421..., but Mathematica does check along the way and converts when the root can be found exactly, such as Sqrt[4] evaluating to 2. If you convert to approximate numerical values first, then it is faster:

 Expand[N[Sqrt[2] + Sqrt[3] + Sqrt[5] + Sqrt[6] + Sqrt[7]]^100]; // AbsoluteTiming
 (* {0., Null} *)

Note the N function, which converts exact expressions to floating-point approximations.
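The distinction between exact and approximate forms can be seen concretely by comparing heads (a small illustrative check, not from the original answer):

```mathematica
(* Sqrt[2] stays a symbolic power; N gives a machine Real; Sqrt[4] evaluates exactly *)
Head /@ {Sqrt[2], N[Sqrt[2]], Sqrt[4]}
(* {Power, Real, Integer} *)
```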

Update

Interestingly, it would seem the reason for the large difference in performance is that Mathematica expands and calculates the product all at once rather than iteratively. This means that if some common subexpressions can be simplified, it carries out those simplifications many times over, rather than once per step as an iterative scheme would.

This much faster code multiplies in one term at a time, expanding at each step:

 term = Sqrt[2] + Sqrt[3] + Sqrt[5] + Sqrt[6] + Sqrt[7];
 Nest[Expand[# term] &, term, 100 - 1]; // AbsoluteTiming
 (* {0.120012, Null} *)
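As a sanity check that the iterative expansion agrees with the direct one (using a smaller exponent so the direct Expand call stays fast; the comments below confirm the results match):

```mathematica
term = Sqrt[2] + Sqrt[3] + Sqrt[5] + Sqrt[6] + Sqrt[7];
Expand[term^10] === Nest[Expand[# term] &, term, 10 - 1]
(* True *)
```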

To see the order in which Mathematica expands the series, you can use:

 f /: Times[a___, b_f, c___] := (Print[a, b, c]; Times @@ (First /@ {a}))
 (f[a] + f[b] + f[c])^6 // Expand
 (* f[a] f[b] f[c] *)
 (* f[b] 6 f[a]^5 *)
 (* f[a] 6 f[b]^5 *)
 (* ... etc. *)
  • No, Maple also gives the exact result. Commented Jan 25, 2013 at 8:29
  • @jVincent: Your workaround hits the point and actually seems to work very well. But you have some errors there which you might want to correct; it should rather be Nest[Expand[#*term] &, term, 100 - 1]. Then the results can be checked to be the same (I only checked for 50, not 100). Commented Jan 25, 2013 at 9:45
  • @AlbertRetey I wouldn't call it a workaround. They are two different methods of achieving the same result, and in this case one is faster. I do wonder, however, whether anyone knows of arguments for using Mathematica's approach rather than iterating. Perhaps it's due to other parts of Expand's functionality. Commented Jan 25, 2013 at 9:54
  • @jVincent: Actually, I don't care what we call it; the fact is that the most straightforward input performs very poorly in this case, and for some reason Maple handles it better internally. There could of course be deeper reasons that Mathematica does the expansion the way it does, and it may even be considered a feature, e.g. to avoid accumulation of numeric errors -- but whatever those reasons are, they don't matter for this case, which thus could be improved. By the way, I think the precedence of // and & is still not correct in your answer (sometimes FullForm has its advantages :-) Commented Jan 25, 2013 at 10:06
  • @AlbertRetey I updated the timing. And indeed, I would expect that Maple is just doing it iteratively while Mathematica evaluates each possible sequence. I expect it's a case where complexity analysis dictates the suboptimal algorithm for this case. I think the Maple-style expansion will in the worst case generate 5^O(n) terms that can then be collected, while Mathematica already has all the trivial terms collected and therefore only evaluates around 5000 terms. However, since the iterative approach can quite effectively reduce the number of terms, it's nowhere near the upper limit for this example. Commented Jan 25, 2013 at 10:38

This no longer reproduces as of Mathematica 10.0, although it does reproduce in version 9, so I assume it has been fixed.


Mathematica is doing it symbolically. Try:

 Expand[(Sqrt[2.] + Sqrt[3.] + Sqrt[5.] + Sqrt[6.] + Sqrt[7.])^100] // Timing
 (* {0., 1.06187*10^102} *)

Almost instantaneous, for a numerical result.
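If machine precision is a concern for a number this large, a sketch of a cross-check is to evaluate the base with arbitrary-precision N before raising it to the power (this variant is my suggestion, not part of the original answer):

```mathematica
(* evaluate the base at 30 significant digits, then power it; precision is tracked *)
N[Sqrt[2] + Sqrt[3] + Sqrt[5] + Sqrt[6] + Sqrt[7], 30]^100
(* ≈ 1.06187*10^102, consistent with the machine-precision result above *)
```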

  • As the OP has stated, Maple is also doing this symbolically and is still much faster than Mathematica. It is a well-known fact that each of these systems has its strengths and weaknesses. I think it would fit this site well if we try to avoid the impression that users with high reputation like you are unwilling to see the weaknesses that Mathematica unavoidably has in some areas. As jVincent has shown, there is a workaround which makes Mathematica almost as fast as Maple for this particular case... Commented Jan 25, 2013 at 9:58
  • @AlbertRetey I'm as critical of the product as a user can be. I've just stated a (perhaps obvious) fact, and that was done before the OP wrote his update. Commented Jan 25, 2013 at 13:31
  • @AlbertRetey At the time of writing of this answer, the OP had not included the information needed to realize that he was in fact comparing two analytic solutions. As such, both belisarius and I believed it was a case of comparing numerical to analytical timings, which is often the case when people observe unexpectedly slow code. Commented Jan 25, 2013 at 14:29
  • @belisarius: I now understand that you didn't have the information when writing this that I had when reading it, which of course makes quite a difference. Sorry if my criticism was unjustified; the answer(s) just looked like a defensive reaction that I think is common but unnecessary (and IMHO even counterproductive). Commented Jan 26, 2013 at 16:00
  • @AlbertRetey No problem, sometimes the timeline of the events on the site isn't clear enough at first sight. As a matter of fact, I plan to delete this answer in a few days, since it's pointless now. Just as a side note, I try to stay far away from bigotry too (and agree about the counter-productiveness of such positions). Commented Jan 26, 2013 at 19:08
