Some programming languages, notably Pascal, have a numeric type called "real".
However, mathematically speaking, these types aren't real. To deserve the name, a "real" type would have to be able to represent any real number. But real numbers like 1/3, as well as all irrationals, cannot be represented exactly in binary floating point. So why do some programming languages call these types "real"?
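To make the 1/3 point concrete, here is a minimal C sketch (C chosen because the discussion below uses C's type names; Pascal's `real` behaves analogously). It only shows that the stored value is the nearest representable double, not 1/3 itself:

```c
#include <stdio.h>

int main(void) {
    /* 1/3 has no finite binary expansion, so the double stores
       only the nearest representable value, not 1/3 itself. */
    double third = 1.0 / 3.0;
    printf("%.20f\n", third); /* prints 0.33333333333333331483... */
    return 0;
}
```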
The `int` type doesn't really denote integers in most languages either. The use of `unsigned int` instead of `nat` or `natural` is a bit perplexing though. `rational` would lead to the expectation that exact rational arithmetic could be performed.
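A small illustration of the first point, again as a hedged C sketch: mathematical naturals are unbounded, whereas C's `unsigned int` is arithmetic modulo 2^N, so incrementing the maximum value wraps to zero (this wraparound is defined behavior for unsigned types):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int n = UINT_MAX;
    printf("%u\n", n);     /* e.g. 4294967295 with a 32-bit unsigned int */
    printf("%u\n", n + 1); /* wraps around to 0, unlike a true natural */
    return 0;
}
```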