
I have a simple C++ program that multiplies two long int variables and prints the result. Here's the code:

#include <iostream>
using namespace std;

int main() {
    long int a = 100000;
    long int b = 100000;
    long int c = a * b;
    cout << c << endl;
    return 0;
}

On online compilers, the output is as expected: 10000000000. However, when I run the same code on my computer, the output is 1410065408. I'm confused as to why this discrepancy is occurring.

Could someone please explain why this difference in output is happening? Is it related to compiler settings or some other factor? And how can I ensure consistent behavior across different environments?

  • The online compilers are probably using 64-bit Linux, on which a long has a width of 64 bits, whereas you are probably using Microsoft Windows, on which a long is usually 32 bits wide. I suggest that you modify your program to also print std::numeric_limits<long>::max() and sizeof(long), to see what these values are on the platforms on which you are running the program. Commented May 17, 2024 at 21:50
  • @AndreasWenzel it's a little maddening that long in Microsoft is only 32 bits, but it's understandable. They're the kings of backwards compatibility, and 20 years ago 32 bits was perfectly defensible - but now they're stuck with it forever so that existing code won't break. Commented May 17, 2024 at 21:54
  • If you want to use a 64-bit integer type, then you can use int64_t. That type is guaranteed to be 64 bits wide on all platforms that provide it. Commented May 17, 2024 at 21:57
  • @AndreasWenzel I agree it's best to use a type that's guaranteed to be big enough, except that int64_t might not be supported on every platform. But at least if it isn't supported you'll get a compile error instead of some unexpected output. Commented May 17, 2024 at 22:05
  • @AndreasWenzel numeric_limits<long>::max() is 2147483647 and sizeof(long) is 4 on my device, whereas the online compiler gives 9223372036854775807 and 8. So I've learned that on my PC, int and long have the same range and size but long long is bigger, while on the online compilers long and long long have the same range and size. Commented May 17, 2024 at 22:26

1 Answer


The standard specifies that long int is at least 32 bits wide. It is very often 64 bits, but it can legitimately be 32. This is up to the compiler (and the platform's ABI), and in your case the two environments differ, which is why they produce different outputs.

A 32-bit signed integer overflows at 2147483648 (2^31).

When overflow occurs for a signed integer, the behavior is undefined, but very often you will see it wrap around (the possible behaviors are listed here).
In the end, the calculation you expect to be 100000 * 100000 effectively becomes (100000 * 100000) % 2147483648 (but again, this is undefined behavior, and something else could happen on another compiler, independently of long being 32 bits).

On the same page I linked above, you will learn that only long long int is guaranteed to be at least 64 bits. There also exist fixed-width integer types you can pick from, which have an explicit width on all platforms that provide them.

Final note: for the same reason your code produces different outputs on the two compilers, the "correct" way to write integer literals, especially if you use them directly rather than assign them to a variable of an explicitly declared type, is to use a suffix.
E.g.: 100000 is an int, 100000L is a long, and 100000LL is a long long int.
What I wrote above really should have been (100000LL * 100000LL) % 2147483648LL.


4 Comments

  • Then should I use int for small numbers and long long for numbers possibly outside int's range, so that I don't have to worry about the compiler or device?
  • % 2147483648 should be % 4294967296. Even though the number is signed, it does not wrap modulo LONG_MAX+1, but rather modulo ULONG_MAX+1. However, the result after wrapping can be negative, which the mathematical formula you posted does not take into account.
  • You should use long long if you think there is any chance 32-bit numbers are too small for your usage, that part is for sure. On modern hardware, the benefit of smaller numbers is not so obvious anymore. I'd say unless a program manipulates many, many integers and/or the hardware it runs on is limited, I do not see a huge benefit of using smaller numbers: I'd rather have the guarantee that numbers never overflow than save such a small amount (in comparison to what is available) of resources on, e.g., my laptop.
  • @AndreasWenzel: Yes, that is correct, but the expression would then have to be (c + 2147483648LL) % 4294967296LL - 2147483648LL. Initially, the problem is just "the size of long is not the same on both compilers", with an explanation of how the output the OP gets does not come from nowhere. TBH, I deemed the above expression too complex and the one in my answer good enough for the scope of the question (not to mention my initial attempt at creating one was uglier than that: c % (2147483648LL) - ((c % (4294967296LL) == c % (2147483648LL)) ? 0 : 2147483648LL)).
