
I'm writing a program that converts a binary string to decimal. I wanted to validate my output before I get really started on this method. I have the following code:

int get_val() {
    int sum = 0;
    for (int num_bits = size; num_bits > 0; num_bits--) {
        printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
    }
}

When I input a string of 16 zeros, I get the following output:

String sub 16 is 24
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 23
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 22
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 21
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0

Why would I be getting different values if I input all zeros?

EDIT: bin is "0000000000000000"

  • You seem to be missing some code. Commented Sep 25, 2010 at 21:45
  • What's the bin array about? Commented Sep 25, 2010 at 21:45
  • Could you please post the rest of the code, especially what bin[] is? Commented Sep 25, 2010 at 21:45
  • If you want to convert a binary string to int, why not use: (int)strtol(bit_string, NULL, 2);? Unless this is homework (or something similar) writing your own routine seems pretty pointless. Commented Sep 25, 2010 at 21:58
  • Your edit doesn't help. Show us how your bin is declared and initialized. And what's size? Commented Sep 25, 2010 at 22:00

2 Answers


As long as the question isn't updated, perhaps this example code helps. It converts a binary string into an integer. I tried to keep as much of your code and variable names as possible.

#include <stdio.h>
#include <stdlib.h>
#include <string>
using namespace std;

int main() {
    string bin = "000111010";
    int size = bin.length();
    int sum = 0;
    for (int num_bits = 1; num_bits <= size; num_bits++) {
        sum <<= 1;
        sum += bin[num_bits - 1] - '0';
    }
    printf("Binary string %s converted to integer is: %i\n", bin.c_str(), sum);
}

As already said in the comments, the main trick here is to convert the ASCII characters '0' and '1' to the integers 0 and 1, which is done by subtracting the value of '0'. I also reversed the traversal order of the string: this way, you can shift the integer after each bit and always set the value of the currently lowest bit.




Short answer, you wouldn't.

Long answer, there are a few issues with this. The first big one: if we assume bin is a standard character array of length size, then your first print is invalid, because the array index is off by 1. Consider this code example:

int size = 16;
char *bin = new char[size];
for (int i = 0; i < size; i++) {
    bin[i] = 0;
}
for (int num_bits = size; num_bits > 0; num_bits--) {
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}

Which produces:

String sub 16 is -3
String sub 15 is 0
String sub 14 is 0
String sub 13 is 0
String sub 12 is 0
String sub 11 is 0
String sub 10 is 0
String sub 9 is 0
String sub 8 is 0
String sub 7 is 0
String sub 6 is 0
String sub 5 is 0
String sub 4 is 0
String sub 3 is 0
String sub 2 is 0
String sub 1 is 0

Judging by the actual output you got, I'm guessing you did something like:

int size = 16;
int *ints = new int[size];
char *bin;
// Fill with numbers, not zeros, based on the evidence
for (int i = 0; i < size; i++) {
    ints[i] = 20 + i;
}
// Copy over to character buffer
bin = (char *)(void *)&(ints[0]);
for (int num_bits = size; num_bits > 0; num_bits--) {
    printf("String sub %i is %i\n", num_bits, int(bin[num_bits]));
}

That explains the output you saw perfectly. So, I'm thinking your input assumption, that bin points to an array of character zeros, is not true. There are a few really big problems with this, assuming you did something like that.

  1. Your assumption that the memory is all zeros is wrong; you need to either explain why it should be, or post the real code
  2. You can't just treat a memory buffer of integers as characters - a string is made up of one-byte characters (typically), while integers are typically 4 bytes
  3. Arrays in C++ start at 0, not 1
  4. Casting a character to an integer [ int('0') ] does not intelligently convert - the integer that comes out of that is a decimal 48, not a decimal 0 (the function atoi will do that conversion, as will other, better ones, or the subtraction suggested in the other answer)

