I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could probably be made faster quite easily, without increasing code size or compromising its readability.

In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log N) algorithm instead of an O(N^2) one) are often worthwhile and should be done early, since it may be wasteful to write an O(N^2) algorithm and then throw it away completely in favor of an O(log N) approach.

Note the words **may be**: if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
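
To make that concrete, here is a toy Java sketch (the duplicate-check problem and all of the names are my own illustration, not anything from the quote's original context): the quadratic version is trivial to write and trivial to discard, while the hash-based version costs barely any extra code, so picking it up front is usually painless.

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {

    // Simple O(N^2) version: fine for small inputs, and cheap to throw away
    // later if it ever becomes a bottleneck.
    static boolean hasDuplicateQuadratic(int[] values) {
        for (int i = 0; i < values.length; i++) {
            for (int j = i + 1; j < values.length; j++) {
                if (values[i] == values[j]) {
                    return true;
                }
            }
        }
        return false;
    }

    // O(N) average-case version using a hash set: barely more code, so
    // choosing it early is hardly "premature" at all.
    static boolean hasDuplicateHashed(int[] values) {
        Set<Integer> seen = new HashSet<>();
        for (int v : values) {
            if (!seen.add(v)) {   // add() returns false if v was already present
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] sample = {3, 1, 4, 1, 5};
        System.out.println(hasDuplicateQuadratic(sample)); // true
        System.out.println(hasDuplicateHashed(sample));    // true
    }
}
```

If both versions were as similar in complexity as these two, writing the faster one first would be the obvious call; the quadratic version only earns its place when it is genuinely simpler.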

Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for example,

 - that a character 'a' tends to be cheaper than a string "A"
 - the advantages of array/vector lists over linked lists, and vice versa
 - when to use a hashtable, when to use a sorted map, and when to use a heap (a short sketch follows this list)
 - that (if they work with mobile devices) "double" and "int" have similar performance on desktops, but "int" may be a hundred times faster on low-end mobile devices without FPUs
 - that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower than L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
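
As a rough sketch of the hashtable / sorted map / heap point above (the class and variable names are mine, and the costs in the comments are the usual textbook figures), the three choices might look like this in Java:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.TreeMap;

public class ContainerChoices {
    public static void main(String[] args) {
        // Hashtable (HashMap): O(1) average lookup by key, no ordering guarantees.
        Map<String, Integer> wordCounts = new HashMap<>();
        wordCounts.merge("apple", 1, Integer::sum);

        // Sorted map (TreeMap): O(log N) operations, but keys stay ordered,
        // so range queries like "first key >= x" are cheap.
        TreeMap<String, Integer> sortedCounts = new TreeMap<>(wordCounts);
        System.out.println(sortedCounts.ceilingKey("a")); // smallest key >= "a"

        // Heap (PriorityQueue): O(log N) insert, O(1) peek at the minimum;
        // ideal when you only ever need the "next" element, not a full ordering.
        PriorityQueue<Integer> pending = new PriorityQueue<>();
        pending.add(42);
        pending.add(7);
        System.out.println(pending.peek()); // 7
    }
}
```

The point is not the syntax but the access pattern: random lookups favor the hashtable, ordered traversal and range queries favor the sorted map, and "give me the next smallest item" favors the heap.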

And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools for the job.

How early to optimize, and how much to worry about performance, depend on the job. When writing scripts that you'll only run a few times, worrying about performance at all is usually a complete waste of time. But if you work at Microsoft or Oracle on a library that **thousands of other developers** are going to use in thousands of different ways, it may pay to optimize the hell out of it, and to plan for that from the beginning, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must always be balanced against the need for readability, maintainability, extensibility, and so on.