
I was browsing Google Code when I chanced upon this project called JSpeed - optimization for JavaScript.

I noticed one of the optimizations was to change i++ to ++i in for loop statements.

Before Optimization

for (i = 0; i < 1; i++) {}

for (var i = 0, j = 0; i < 1000000; i++, j++) {
    if (i == 4) {
        var tmp = i / 2;
    }
    if ((i % 2) == 0) {
        var tmp = i / 2;
        i++;
    }
}

var arr = new Array(1000000);
for (i = 0; i < arr.length; i++) {}

After optimization

for(var i=0;i<1;++i){}

for(var i=0,j=0;i<1000000;++i,++j){
    if(i==4){var tmp=i>>1;}
    if((i&1)==0){var tmp=i>>1;i++;}
}

var arr=new Array(1000000);
for(var i=0,arr_len=arr.length;i<arr_len;++i){}

I know what pre- and post-increments do, but any idea how this speeds the code up?

  • Does optimization mean squeezing all the code together to make it unreadable? Genius! Commented Oct 10, 2009 at 3:47
  • Nope. Optimization means improving and speeding up certain parts of the code to make them more efficient and less CPU-costly. Squeezing code together to make it unreadable might also be called packing or minifying - and that is not necessarily optimization, since it takes time to unpack. Commented Oct 10, 2009 at 4:45
  • Since when does the parser not need to unpack anything? The optimization here is transport, not performance. Commented Oct 10, 2009 at 8:26
  • This is also true in many other languages/compilers. Commented Dec 12, 2009 at 22:53
  • There is actually an optimization: the divisions by 2 have been replaced by a right shift operation. Commented Sep 12, 2016 at 11:34

9 Answers


This is what I read, and it may answer your question: "preincrement (++i) adds one to the value of i, then returns i; in contrast, i++ returns i then adds one to it, which in theory results in the creation of a temporary variable storing the value of i before the increment operation was applied".
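The quoted behavior is easy to see directly; a minimal illustration of the different returned values:

```javascript
let i = 1;
let post = i++;  // post receives the old value; i is now 2
i = 1;
let pre = ++i;   // i becomes 2 first; pre receives the new value

console.log(post, pre); // 1 2
```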


5 Comments

It came from: physical-thought.blogspot.com/2008/11/…. As I understand, the practice may be different per compiler. By the way: via home.earthlink.net/~kendrasg/info/js_opt you may learn more about javascript optimization.
Hi Kooilnc - yep saw that blog post by googling. thanks a lot.
see this performance test: jsperf.com/…
i = 1; i = i++; console.log(i); // 1
i = 1; i = ++i; console.log(i); // 2
Note that this difference is irrelevant if you're not using the returned value. Any decent compiler can tell when this is the case and generate identical code for both. In particular, when incrementing the loop control variable, the returned value isn't used.

This is a faux optimization. As far as I understand it, you're saving one opcode. If you're looking to optimize your code with this technique, then you've gone the wrong way. Also, most compilers/interpreters will optimize this for you anyway (reference 1). In short, I wouldn't worry about it. But if you're really worried, you should use i += 1.

Here's the quick-and-dirty benchmark I just did

var MAX = 1000000, t = 0, i = 0;

t = (new Date()).getTime();
for ( i = 0; i < MAX; i++ ) {}
t = (new Date()).getTime() - t;
console.log(t);

t = (new Date()).getTime();
for ( i = 0; i < MAX; ++i ) {}
t = (new Date()).getTime() - t;
console.log(t);

t = (new Date()).getTime();
for ( i = 0; i < MAX; i += 1 ) {}
t = (new Date()).getTime() - t;
console.log(t);

Raw results

Post    Pre     +=
1071    1073    1060
1065    1048    1051
1070    1065    1060
1090    1070    1060
1070    1063    1068
1066    1060    1064
1053    1063    1054

Removed lowest and highest

Post    Pre     +=
1071    ----    1060
1065    ----    ----
1070    1065    1060
----    1070    1060
1070    1063    ----
1066    1060    1064
----    1063    1054

Averages

Post    Pre     +=
1068.4  1064.2  1059.6

Notice that this is over one million iterations and the results are within 9 milliseconds on average. Not really much of an optimization considering that most iterative processing in JavaScript is done over much smaller sets (DOM containers for example).

3 Comments

My point was that the difference is negligible and can't really be differentiated in smaller datasets (<1000), which are more common in JavaScript than larger data sets. Typically, datasets iterated over in JavaScript are DOM collections, which are usually under 200 members. Even then, the bottleneck in those situations is the DOM, not the minimal optimization of pre vs. post vs. +=.
@mauris - "1 op * n iterations can be a lot" only if considered absolutely; in any real code it will be only a tiny part of the entire loop, so relative to the whole operation it will be negligible. A 9 ms difference on a loop that takes 1 s is not important.
I don't think this is good enough evidence to say i += 1 is any better. The numbers are too close - better to check the bytecode as Sylvain Leroux did.

In theory, using a post-increment operator may produce a temporary. In practice, JavaScript compilers are smart enough to avoid that, especially in such a trivial case.

For example, consider this sample code:

sh$ cat test.js
function preInc(){
  for(i=0; i < 10; ++i)
    console.log(i);
}

function postInc(){
  for(i=0; i < 10; i++)
    console.log(i);
}

// force lazy compilation
preInc();
postInc();

In that case, the V8 compiler in NodeJS produces exactly the same bytecode (look esp. at opcodes 39-44 for the increment):

sh$ node --version
v8.9.4
sh$ node --print-bytecode test.js | sed -nEe '/(pre|post)Inc/,/^\[/p'
[generating bytecode for function: preInc]
Parameter count 1
Frame size 24
   77 E> 0x1d4ea44cdad6 @    0 : 91             StackCheck
   87 S> 0x1d4ea44cdad7 @    1 : 02             LdaZero
   88 E> 0x1d4ea44cdad8 @    2 : 0c 00 03       StaGlobalSloppy [0], [3]
   94 S> 0x1d4ea44cdadb @    5 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44cdade @    8 : 1e fa          Star r0
         0x1d4ea44cdae0 @   10 : 03 0a          LdaSmi [10]
   94 E> 0x1d4ea44cdae2 @   12 : 5b fa 07       TestLessThan r0, [7]
         0x1d4ea44cdae5 @   15 : 86 23          JumpIfFalse [35] (0x1d4ea44cdb08 @ 50)
   83 E> 0x1d4ea44cdae7 @   17 : 91             StackCheck
  109 S> 0x1d4ea44cdae8 @   18 : 0a 01 0d       LdaGlobal [1], [13]
         0x1d4ea44cdaeb @   21 : 1e f9          Star r1
  117 E> 0x1d4ea44cdaed @   23 : 20 f9 02 0f    LdaNamedProperty r1, [2], [15]
         0x1d4ea44cdaf1 @   27 : 1e fa          Star r0
  121 E> 0x1d4ea44cdaf3 @   29 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44cdaf6 @   32 : 1e f8          Star r2
  117 E> 0x1d4ea44cdaf8 @   34 : 4c fa f9 f8 0b CallProperty1 r0, r1, r2, [11]
  102 S> 0x1d4ea44cdafd @   39 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44cdb00 @   42 : 41 0a          Inc [10]
  102 E> 0x1d4ea44cdb02 @   44 : 0c 00 08       StaGlobalSloppy [0], [8]
         0x1d4ea44cdb05 @   47 : 77 2a 00       JumpLoop [42], [0] (0x1d4ea44cdadb @ 5)
         0x1d4ea44cdb08 @   50 : 04             LdaUndefined
  125 S> 0x1d4ea44cdb09 @   51 : 95             Return
Constant pool (size = 3)
Handler Table (size = 16)
[generating bytecode for function: get]
[generating bytecode for function: postInc]
Parameter count 1
Frame size 24
  144 E> 0x1d4ea44d821e @    0 : 91             StackCheck
  154 S> 0x1d4ea44d821f @    1 : 02             LdaZero
  155 E> 0x1d4ea44d8220 @    2 : 0c 00 03       StaGlobalSloppy [0], [3]
  161 S> 0x1d4ea44d8223 @    5 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44d8226 @    8 : 1e fa          Star r0
         0x1d4ea44d8228 @   10 : 03 0a          LdaSmi [10]
  161 E> 0x1d4ea44d822a @   12 : 5b fa 07       TestLessThan r0, [7]
         0x1d4ea44d822d @   15 : 86 23          JumpIfFalse [35] (0x1d4ea44d8250 @ 50)
  150 E> 0x1d4ea44d822f @   17 : 91             StackCheck
  176 S> 0x1d4ea44d8230 @   18 : 0a 01 0d       LdaGlobal [1], [13]
         0x1d4ea44d8233 @   21 : 1e f9          Star r1
  184 E> 0x1d4ea44d8235 @   23 : 20 f9 02 0f    LdaNamedProperty r1, [2], [15]
         0x1d4ea44d8239 @   27 : 1e fa          Star r0
  188 E> 0x1d4ea44d823b @   29 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44d823e @   32 : 1e f8          Star r2
  184 E> 0x1d4ea44d8240 @   34 : 4c fa f9 f8 0b CallProperty1 r0, r1, r2, [11]
  168 S> 0x1d4ea44d8245 @   39 : 0a 00 05       LdaGlobal [0], [5]
         0x1d4ea44d8248 @   42 : 41 0a          Inc [10]
  168 E> 0x1d4ea44d824a @   44 : 0c 00 08       StaGlobalSloppy [0], [8]
         0x1d4ea44d824d @   47 : 77 2a 00       JumpLoop [42], [0] (0x1d4ea44d8223 @ 5)
         0x1d4ea44d8250 @   50 : 04             LdaUndefined
  192 S> 0x1d4ea44d8251 @   51 : 95             Return
Constant pool (size = 3)
Handler Table (size = 16)

Of course, other JavaScript compilers/interpreters may do otherwise, but this is doubtful.

As a last word, for what it's worth, I nevertheless consider it a best practice to use pre-increment when possible: since I frequently switch languages, I prefer using the syntax with the correct semantics for what I want, instead of relying on compiler smartness. For example, modern C compilers won't make any difference either. But in C++, this can have a significant impact with an overloaded operator++.

Comments


Anatoliy's test included a post-decrement (while(i--)) inside the pre-increment test function :(

Here are the results without this side effect...

function test_post() {
    console.time('postIncrement');
    var i = 1000000, x = 0;
    do x++; while(i--);
    console.timeEnd('postIncrement');
}

function test_pre() {
    console.time('preIncrement');
    var i = 1000000, x = 0;
    do ++x; while(--i);
    console.timeEnd('preIncrement');
}

test_post(); test_pre();
test_post(); test_pre();
test_post(); test_pre();
test_post(); test_pre();

Output

postIncrement: 3.21ms
preIncrement: 2.4ms
postIncrement: 3.03ms
preIncrement: 2.3ms
postIncrement: 2.53ms
preIncrement: 1.93ms
postIncrement: 2.54ms
preIncrement: 1.9ms

That's a big difference.

2 Comments

I think the reason those are different is because while(i--) has to save the value of i, then decrement i, then examine the prior value of i to decide if the loop is finished. while(--i) does not have to do that extra work. It's very unusual to use i-- or i++ in a conditional test. Certainly in the increment operation of a for statement, but not in a conditional test.
When you use --i, you should set it to 1000001, because it will end up earlier :) But of course, it's not a big difference.
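The off-by-one mentioned in the comments is easy to verify; countPost and countPre are illustrative helpers that count loop passes:

```javascript
// do/while with a post-decrement test runs one extra pass compared to --i:
function countPost(i) { let n = 0; do n++; while (i--); return n; }
function countPre(i)  { let n = 0; do n++; while (--i); return n; }

console.log(countPost(5)); // 6 - the old (truthy) value is tested, then i drops
console.log(countPre(5));  // 5 - i drops before the test, exiting one pass sooner
```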

Sounds like premature optimization. When you're nearly done with your app, check where the bottlenecks are and optimize those as needed. But if you want a thorough guide to loop performance, check this out:

http://blogs.oracle.com/greimer/entry/best_way_to_code_a

But you never know when this will become obsolete because of JS engine improvements and variations between browsers. Best choice is to not worry about it until it's a problem. Make your code clear to read.

Edit: According to this guy the pre vs. post is statistically insignificant. (with pre possibly being worse)

3 Comments

It's more about the increment part than the way arrays are accessed. I know how for(i=0;i<arr.length;i++) can slow down the code (each iteration reads arr.length) - but not how pre and post increment do.
I don't see anything in your link that discusses pre vs post increment.
Ha! I'm blind. There's no pre vs post in my link. Checking for a proper reference now.
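The arr.length point raised in the comments can be sketched like this (a minimal illustration; the cached-length form mirrors JSpeed's arr_len rewrite):

```javascript
var arr = new Array(1000000);

// Re-reads arr.length on every iteration:
var n1 = 0;
for (var i = 0; i < arr.length; i++) n1++;

// Caches the length once before the loop:
var n2 = 0;
for (var j = 0, len = arr.length; j < len; j++) n2++;

console.log(n1 === n2); // true - both loops run 1000000 times
```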

The optimization isn't the pre versus post increment. It's the use of bitwise 'shift' and 'and' operators rather than divide and mod.

There is also the optimization of minifying the javascript to decrease the total size (but this is not a runtime optimization).
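The bitwise rewrites can be seen concretely in a small sketch (the helper names half and isEven are illustrative):

```javascript
// For non-negative 32-bit integers, a right shift by 1 is a cheap divide-by-2,
// and (i & 1) extracts the parity - exactly the rewrites JSpeed applies:
function half(i)   { return i >> 1; }
function isEven(i) { return (i & 1) === 0; }

console.log(half(7), isEven(7)); // 3 false
console.log(half(8), isEven(8)); // 4 true

// Caveat: the equivalence only holds for non-negative integers below 2^31;
// (-3 >> 1) is -2 (flooring), while -3 / 2 truncated toward zero is -1.
```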

5 Comments

There is some evidence that pre vs. post does make a difference...depending on the engine.
Can you provide a source? That doesn't make much sense to me.
i know there are other optimizations as well. but if this is not considered part of optimization then why does JSpeed bother including this changing post to pre increment?
The link doesn't reference anything about pre vs. post increment.
Yeah. My mistake. Ignore most of what I've said. I have foggy memories of reading some tests where it did make a difference.

This is probably cargo-cult programming. It shouldn't make a difference when you're using a decent compiler/interpreter for languages that don't have arbitrary operator overloading.

This optimization made sense for C++ where

T x = ...;
++x;

could modify a value in place whereas

T x = ...;
x++;

would have to create a copy by doing something under-the-hood like

T x = ...;
T copy;
(copy = T(x), ++x, copy);

which could be expensive for large struct types or for types that do lots of computation in their copy constructor.
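The same copy cost can be sketched in JavaScript by spelling out what each operator conceptually does (preIncrement, postIncrement, and the counter object are illustrative, not real APIs):

```javascript
function preIncrement(counter) {
  counter.value += 1;
  return counter;              // no copy: the updated object itself is returned
}

function postIncrement(counter) {
  const copy = { ...counter }; // snapshot of the old state - the "temporary"
  counter.value += 1;
  return copy;                 // callers observe the pre-increment snapshot
}

const c = { value: 0 };
console.log(preIncrement(c).value);  // 1
console.log(postIncrement(c).value); // 1 (but c.value is now 2)
```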

Comments


Using post-increment here causes a stack overflow. Why? Because start++ and end-- return the old values, so each recursive call receives the same start and end and the recursion never advances.

function reverseString(string = [], start = 0, end = string.length - 1) {
    if (start >= end) return string
    let temp = string[start]
    string[start] = string[end]
    string[end] = temp
    // don't do this - start++ and end-- pass the old values,
    // so the recursion never makes progress:
    // reverseString(string, start++, end--)
    reverseString(string, ++start, --end)
    return string
}

let array = ["H", "a", "n", "n", "a", "h"]
console.log(reverseString(array))

Comments


Just tested it in Firebug and found no difference between post- and pre-increments. Maybe this optimization matters on other platforms? Here is my code for Firebug testing:

function test_post() {
    console.time('postIncrement');
    var i = 1000000, x = 0;
    do x++; while(i--);
    console.timeEnd('postIncrement');
}

function test_pre() {
    console.time('preIncrement');
    var i = 1000000, x = 0;
    do ++x; while(i--);
    console.timeEnd('preIncrement');
}

test_post(); test_pre();
test_post(); test_pre();
test_post(); test_pre();
test_post(); test_pre();

Output is:

postIncrement: 140ms
preIncrement: 160ms
postIncrement: 136ms
preIncrement: 157ms
postIncrement: 148ms
preIncrement: 137ms
postIncrement: 136ms
preIncrement: 148ms

4 Comments

I've already done the test on Firefox; it doesn't show much difference either. The theory given in the other answer might be the explanation. Thanks for the effort!
Who cares, speed-wise? Unless your JavaScript is doing zillions of iterations, it's not going to be noticeable to the end user.
@mP - agreed. but some browsers coughIE... =D
@mP. maybe now with Node.js…
