int x = 5; char y = x + '0';

Apparently this bit of code converts an integer to a string. But I don't understand how it works behind the scenes. I'm a newbie. Could someone explain how adding a string 0 to an int converts it into a string?


3 Answers


In

char y = x + '0';

take the character code of '0' (ASCII, or whatever character encoding your system implements), add 5 to it, then interpret that integer as a character again: the result is the character '5'.

Behind the scenes this is what's happening:

y = 5 + 48; // in ASCII/UTF-8 the character code of '0' is 48
y = 53

and the ASCII/UTF-8 character for 53 is '5'.

It does not convert the integer to a string. It just gives you the equivalent character. We can convert an int in the range 0-127 to a char or vice versa; the result is interpreted through the ASCII/UTF-8 encoding. char values are stored as small integers in memory, typically 8 bits wide.


You have a bit of a misunderstanding here. The variable y isn't a string, it's a char. A string is an array of chars. What you are actually doing here is:

char y = x + 48; // 48 is the ASCII encoding of '0'

This is somewhat like:

char y = 50;
printf("%c\n", y); // prints 2 as a char

This is because chars are encoded with specific values. E.g. 'a' is 97, 'b' is 98, and so on. In the same way, '0' has code 48.

 char y = x + '0' 

means adding the encoded value of 0 (i.e. 48) to x. And because 0 is the first digit, the other digits' codes are right next to it. This is what I mean:

48 -> This is 0
49 -> This is 1
50 -> This is 2
51 -> This is 3
52 -> This is 4
53 -> ... and so on

You may notice that in order to get, e.g., 1, we need to add one to zero: 48 + 1 = 49, and 49 is the encoding for 1. This is true for all the digits.

Note: I've used ASCII encoding for the explanation, but there are others, and this should work on most other encodings too.

1 Comment

Re “it should work on most on weird encodings”: Adding the value of a digit (0-9) to '0' works in all C implementations that conform to the C standard, because the standard requires the character codes for the digits to be consecutive.

'0' is an int, not a string. String literals have double quotes (e.g. "Hi!"). Character literals (which are integers) have single quotes (e.g. 'A').

x + '0' evaluates to int since both the operands are int.

The integral value of a character literal is its ASCII value: https://en.wikipedia.org/wiki/ASCII#Character_set
From the ASCII table, the integer value of '0' is 48.

Now, coming to the expression in the question:
x is 5, '0' is 48.
Therefore y = 48 + 5 (i.e. 53).

When you print y as a character using printf("%c", y), the character whose ASCII value is 53 is printed. i.e. '5'

The reason adding '0' to a single-digit integer seems to convert it into a character is that '0' to '9' are one after another in the ASCII table. This means '9' (57) is exactly 9 positions after '0' (48).

This is why adding double digit integers to '0' will not work. E.g. '0' + 10 will be ':' (ASCII value 58).

To properly convert larger strings to integers, use the atoi function (https://man7.org/linux/man-pages/man3/atoi.3.html). e.g.

int a = atoi("1023"); // a is an integer with the value 1023 

4 Comments

That means it can take values from -128 to 127 - that's completely wrong. char can be signed or unsigned and can contain more than 8 bits of data. So char can contain values in the range [0, 2^CHAR_BIT - 1] if it's unsigned, and typically [-2^(CHAR_BIT-1), 2^(CHAR_BIT-1) - 1] if signed.
'0' is a character - actually it's an integer: "An integer character constant has type int. ..."
Thanks for the feedback @phuclv and @Andrew! I've always assumed that char is signed by default and takes exactly 8 bits (since uint8_t typedefs to unsigned char; I did not know that even that is implementation-specific). Also did not know that '0' is an int. (Why is that? Wouldn't it be more efficient to use char?) Anyway, I'm updating the answer to fix those issues. If it still does not provide value, please let me know and I will delete it.
@RehanVipin uintN_t are optional types and uint8_t only exists if CHAR_BIT == 8. Only (u)int_fastN_t and (u)int_leastN_t are required. I don't know why C decides that character literals should be int, but probably because of multicharacter literals like 'abcd', which were commonly used for signatures like FourCC, and because of integer promotion. But in C++ '0' is a char: Why are C character literals ints instead of chars?
