  • Hi Cody Gray, thanks for your reply. Just one more follow-up question: I saw a big speed-up from these VEX-prefixed instructions when compiling with fast math. Do you have any idea why these VEX-prefixed instructions perform better than the original x87 instructions? Thanks. Commented Jun 2, 2016 at 10:27
  • @PST Well, the regular SSE instructions tend to be faster than the x87 instructions. There are multiple complicated reasons for that. One of the most significant is that the x87 FPU works off of a stack-based system, with all of the attendant limitations, whereas the SSE implementation uses a flat register file. That means no time is wasted pushing/popping values on the stack, or exchanging values at different positions on the stack. Another reason that SSE is faster than x87 is simply that it is a newer implementation and has been optimized accordingly. Commented Jun 7, 2016 at 5:42
  • Then, my answer already explains why the VEX-prefixed SSE instructions are faster than regular SSE instructions. So you are getting the benefit of two performance improvements: first switching from x87 to SSE, and then switching from SSE to VEX-encoded SSE. The Intel engineers must have been up to something these past 15-20 years. :-) Commented Jun 7, 2016 at 5:44