
In some of the projects I'm working on as part of my day job, I need to access data in very large JS objects (on the order of thousands of key-value pairs). I'm trying to improve the efficiency of my code, so I came up with a few questions:

  1. What is the runtime complexity of JS when accessing a field in such an object? My initial hunch is that it's O(n)
  2. Is there a difference when accessing through dot notation or bracket notation? (e.g. obj.field vs obj['field'])
  3. I'm guessing there is a different answer for different runtime engines - is there a place where I can see the difference between them?
1 Comment
    On V8 it also depends on how the object is treated. If delete is used, for example, the object is internally converted to another representation that is far less performant. Take a look at Map; I don't know its performance characteristics, but it may be faster in your situation. Commented Jul 23, 2017 at 15:46
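For illustration, a minimal sketch of the pattern this comment warns about (the object contents are made up, and the dictionary-mode fallback is a V8 implementation detail, not a language guarantee):

    // Using `delete` on a plain object can force V8 to abandon its fast
    // hidden-class representation and fall back to a slower dictionary mode.
    const obj = { a: 1, b: 2, c: 3 };
    delete obj.b; // may deoptimize the object in V8

    // Two common alternatives that keep the representation stable:
    obj.c = undefined; // keep the key, clear the value

    const map = new Map([['a', 1], ['b', 2], ['c', 3]]);
    map.delete('b'); // Map is designed for frequent insertion and removal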

3 Answers


JavaScript objects are essentially hash tables, so the complexity is O(1) for all engines.

obj.field is an alias for obj['field'], so they have the same performance.
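A quick sketch of that equivalence (the property and variable names are just placeholders):

    const obj = { field: 42 };

    // Dot and bracket notation resolve the same property:
    console.log(obj.field === obj['field']); // true

    // Bracket notation additionally allows computed keys:
    const key = 'field';
    console.log(obj[key]); // 42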

You can find some JS hash performance tests here; unfortunately, they only cover your current browser's engine.




In the worst case, a JS object is represented as a hash table and has the same lookup complexity: O(1) on average but O(n) at worst. The hash table representation is likely your case, since you have so many items in the object. There is no difference in how you access a property: obj.field and obj['field'] are the same.

It's also worth mentioning that the complexity isn't always equal to that of a hash table lookup; in many cases it's faster. Modern JS engines use techniques called hidden classes and inline caching to speed up the lookup. How these techniques work is a pretty big question, and I've explained it in another answer.
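As a rough illustration of why this helps (the Point class and sumX function are invented for the example): when every object reaching a property access site has the same shape, the engine can cache the lookup instead of repeating it.

    // Objects created by the same constructor share one hidden class,
    // so the access to p.x below stays monomorphic and can be
    // inline-cached by engines such as V8.
    class Point {
      constructor(x, y) {
        this.x = x;
        this.y = y;
      }
    }

    function sumX(points) {
      let total = 0;
      for (const p of points) {
        total += p.x; // same shape every iteration -> cached lookup
      }
      return total;
    }

    console.log(sumX([new Point(1, 2), new Point(3, 4)])); // 4

Mixing differently shaped objects (for example, plain literals with extra keys) at the same call site can make the cache polymorphic and slower.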

Some related resources:

4 Comments

If it's implemented with a hash table, the worst-case time complexity would be O(n), or at least O(log n). How can it be O(1)?
@tsh Indeed, I put it wrong, thanks for correcting me! I've updated the answer.
The complexity of the hash function would depend on the length of the key, right? For cases where the key is a string, can it still be O(1)?
@Aldrin Correct, "XY is O(…)" always implies many assumptions (that are usually not stated). In case the engine uses a hashmap, one could also say it is O(n + m), where m is the length of the key, or O(n * m) if the engine uses a list of keys (this is usually the default representation). All of that is purely hypothetical, though, for "some hypothetical engine". Reality is more complex.

The other answers make unproven claims about "JavaScript": the specification as such does not guarantee any time complexity for key lookup. Since accessing objects with many keys is a common pattern, though, most engines will likely optimize for it and use an object representation that provides fast property access. But there might very well be other representations, optimized for other use cases, that have a lookup complexity of O(n).

If you want a data structure that guarantees fast key lookup, you might want to consider using a Map, for which the specification states:

Maps must be implemented using either hash tables or other mechanisms that, on average, provide access times that are sublinear on the number of elements in the collection. The data structure used in this specification is only intended to describe the required observable semantics of Maps. It is not intended to be a viable implementation model.
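A minimal usage sketch (the key format and collection size are illustrative):

    // Map gives spec-guaranteed sublinear average access time and
    // handles frequent insertion and deletion without the
    // deoptimization concerns that `delete` raises for plain objects.
    const lookup = new Map();

    for (let i = 0; i < 10000; i++) {
      lookup.set(`key${i}`, i);
    }

    console.log(lookup.get('key1234')); // 1234
    console.log(lookup.has('nope'));    // false
    lookup.delete('key1234');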

