Note that you can't find the bottlenecks until you have a program to measure, which means that some performance decisions must be made before anything exists to measure. Some of these decisions are difficult to change if you get them wrong (for example, if you choose to write your software in JavaScript instead of C#/Java, your code will have a much lower performance ceiling, which might be fine or frustrating depending on whether you ever hit that ceiling and how hard it is to push past once you do). For this reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
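Once there is code to run, though, getting a rough number is cheap. The Java sketch below is a minimal illustration of that, not a proper benchmark: the class name, the loop size, and the choice of string concatenation versus StringBuilder are all just illustrative stand-ins for whatever you actually suspect is expensive, and a throwaway harness like this only gives an order-of-magnitude feel (a real framework such as JMH handles warmup and statistics properly).

```java
// A minimal sketch of timing a suspected hotspot once there is code to run.
// The workload (string concatenation vs. StringBuilder) is only an example;
// substitute whatever operation you actually care about.
public class CostSketch {
    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;   // repeated copying: quadratic
        return s;
    }

    static String build(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);
        return sb.toString();                  // amortized linear
    }

    static long timeNanos(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 50_000;
        // Run both paths once first so the JIT has seen them before timing.
        concat(n);
        build(n);
        System.out.println("concat: " + timeNanos(() -> concat(n)) / 1_000_000 + " ms");
        System.out.println("build:  " + timeNanos(() -> build(n)) / 1_000_000 + " ms");
    }
}
```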
How early to optimize, and how much to worry about performance at all, depend on the job. When writing scripts that you'll only run a few times, worrying about performance is usually a complete waste of time. But if you work for Microsoft or Oracle and you're building a library that thousands of other developers are going to use in thousands of different ways, it may pay to optimize the hell out of it, so that all of those diverse use cases are covered efficiently. Even so, the need for performance must always be balanced against the need for readability, maintainability, elegance, extensibility, and so on.