#define by itself will replace the symbol with nothing.
On the other hand, a #define with a value (the "#define 1" form you mention) will replace the symbol with that value everywhere it is found in the file. So, for example, the following code:
    #include <iostream>

    #define EXAMPLE "hello"

    int main()
    {
        std::cout << EXAMPLE;
        return 0;
    }
prints
    hello
This is because EXAMPLE here is replaced with "hello", making the print statement equivalent to:
    std::cout << "hello";
If we change the #define statement to this instead:
    #define EXAMPLE
This will give a compile error: EXAMPLE now expands to nothing, so the compiler is left with std::cout << ; which is not valid C++:

    main.cpp: In function ‘int main()’:
    main.cpp:15:25: error: expected primary-expression before ‘;’ token
         std::cout << EXAMPLE;
As to why you would ever use this second form of #define, it's because there is another preprocessor directive that you can use called #ifdef:
    #include <iostream>

    #define EXAMPLE

    int main()
    {
    #ifdef EXAMPLE
        std::cout << "EXAMPLE defined.";
    #endif
        return 0;
    }
This will print:
    EXAMPLE defined.
Because #ifdef (and its relative #ifndef) only checks that the symbol is defined, we don't really need to give it a value. It just needs to be there to work.
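To illustrate that only the definition matters, here is a minimal sketch (VERBOSE is just a made-up name for this example): even with a value of 0, #ifdef still passes.

    #include <iostream>

    // VERBOSE is a hypothetical macro used only for illustration.
    // Its value is 0, yet #ifdef still passes, because #ifdef only
    // checks whether the symbol is defined at all, not what it expands to.
    #define VERBOSE 0

    int main()
    {
    #ifdef VERBOSE
        std::cout << "VERBOSE is defined, regardless of its value.";
    #endif
        return 0;
    }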
A common place you see this is in header guards (which is probably what you're seeing). You also see it with platform identification, or even in checks for whether the code is being compiled as C++ at all.
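For instance, a header guard looks something like this (my_header.h and MY_HEADER_H are just illustrative names):

    // my_header.h (hypothetical header, name chosen for illustration)
    #ifndef MY_HEADER_H      // skip everything below if this file was already included
    #define MY_HEADER_H

    int add(int a, int b);   // declarations go here

    #endif // MY_HEADER_H

The first time the header is included, MY_HEADER_H is not yet defined, so the contents are compiled and the macro gets defined; on any later include in the same translation unit, #ifndef fails and the contents are skipped. Similarly, a header meant to be usable from both C and C++ can test the predefined __cplusplus macro:

    #ifdef __cplusplus
    extern "C" {              // only emitted when compiled as C++
    #endif

    void c_function(void);

    #ifdef __cplusplus
    }
    #endif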
One last note: names that contain a double underscore (like __NEWLIB_H__) and names that begin with an underscore followed by a capital letter are reserved for use by the implementation. Don't use them in your own code.