I've been experimenting with making a binary calculator lately, and I've got addition and subtraction working. I now want to start making a multiplier, but I have no idea where to even start. I do know how to multiply two binary numbers on paper; after all, it's the same long multiplication we all learned in school. Take this -5 * 7 operation as an example:
    1011 x 0111
    ------------
        11111011
        1111011X
        111011XX
    ------------
    (10)11011101

Each partial product is the multiplicand sign-extended to 8 bits (1011 becomes 11111011), and the X's mark the positions vacated by the left shift. The result is 11011101, which is -35 in decimal, as expected. The first two bits are discarded as overflow, hence the brackets. This works fine on paper, but I don't see an obvious and concise way to turn it into a logic circuit, which leads me to my first question:
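To check my understanding before touching any gates, here's a minimal C sketch of the same shift-and-add method (the name mul4_shift_add and the framing are mine, just for illustration). Both inputs are assumed to be already sign-extended into 8-bit variables, and the multiplier's sign bit is given its 2's-complement weight of -8, which is what keeps the method correct when the multiplier itself is negative:

    #include <stdint.h>
    #include <stdio.h>

    /* Shift-and-add multiply of two 4-bit 2's-complement numbers,
     * mirroring the paper method above. Both inputs arrive already
     * sign-extended into int8_t (e.g. 1011 -> 11111011). */
    static int8_t mul4_shift_add(int8_t a, int8_t b)
    {
        uint8_t acc = 0;           /* 8-bit accumulator; carries past bit 7 drop off */
        uint8_t pp  = (uint8_t)a;  /* sign-extended multiplicand */

        for (int i = 0; i < 4; i++) {
            if (((uint8_t)b >> i) & 1) {
                if (i == 3)
                    acc -= (uint8_t)(pp << i);  /* sign bit of b has weight -8 */
                else
                    acc += (uint8_t)(pp << i);  /* shifted partial product, mod 2^8 */
            }
        }
        return (int8_t)acc;
    }

    int main(void)
    {
        printf("%d\n", mul4_shift_add(-5, 7));   /* prints -35 */
        printf("%d\n", mul4_shift_add(-5, -1));  /* prints 5 */
        return 0;
    }

The only difference from an unsigned shift-and-add is the subtraction on bit 3, and that's exactly the part the circuit will also have to deal with somehow.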
How do I multiply two signed (2's-complement) 4-bit numbers using a logic circuit?
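My best guess so far is an array of AND gates feeding adders, so here's a bit-level C model where every operation stands in for a gate. It's only a sketch of one possible structure (sign-extend every partial-product row to 8 bits, then subtract the row belonging to the multiplier's sign bit, since that bit has weight -8); as far as I know the textbook circuits for this are the Baugh-Wooley multiplier and Booth encoding, which fold the sign handling into the array more cheaply:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t bit;   /* one wire */

    /* One full adder: the basic cell the array is built from. */
    static bit full_adder(bit a, bit b, bit cin, bit *cout)
    {
        *cout = (a & b) | (cin & (a ^ b));
        return a ^ b ^ cin;
    }

    /* 8-bit ripple-carry adder: acc += p, carry out of bit 7 discarded. */
    static void add8(bit acc[8], const bit p[8])
    {
        bit c = 0;
        for (int i = 0; i < 8; i++)
            acc[i] = full_adder(acc[i], p[i], c, &c);
    }

    /* 4x4 signed array multiplier, LSB-first bit vectors.
     * Row r is the multiplicand ANDed with b[r], sign-extended to
     * 8 bits and shifted left by r; the row for b[3] is subtracted
     * (added in 2's complement) because that bit has weight -8. */
    static void mul4x4(const bit a[4], const bit b[4], bit out[8])
    {
        bit acc[8] = {0};
        for (int row = 0; row < 4; row++) {
            bit pp[8];
            for (int col = 0; col < 8; col++) {
                int k = col - row;                               /* shift by row */
                bit abit = (k < 0) ? 0 : (k > 3 ? a[3] : a[k]);  /* sign-extend */
                pp[col] = abit & b[row];                         /* one AND gate */
            }
            if (row == 3) {                      /* negate the last row: */
                bit one[8] = {1, 0, 0, 0, 0, 0, 0, 0};
                for (int col = 0; col < 8; col++)
                    pp[col] ^= 1;                /* invert ... */
                add8(pp, one);                   /* ... and add 1 */
            }
            add8(acc, pp);
        }
        for (int i = 0; i < 8; i++)
            out[i] = acc[i];
    }

    int main(void)
    {
        bit a[4] = {1, 1, 0, 1};   /* 1011 = -5, LSB first */
        bit b[4] = {1, 1, 1, 0};   /* 0111 =  7 */
        bit out[8];
        mul4x4(a, b, out);
        for (int i = 7; i >= 0; i--)
            printf("%d", out[i]);  /* prints 11011101 */
        printf("\n");
        return 0;
    }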
If that (or something like it) works, I can try to fit it into my "calculator". But there's another problem: I have two signed 8-bit inputs and only a single 8-bit output to work with. So I'll need some way of knowing when the result is too big to fit in an 8-bit number, which brings me to my second (but less important) question:
How do I know if the result is too big to fit in an 8-bit number? And is that already possible using the multiplier we just made?
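For what it's worth, here's what I think the check has to look like in software, assuming the multiplier is widened to produce the full 16-bit product of two 8-bit inputs (a full n-by-n multiplier's 2n-bit product can never overflow, so the only question is whether it fits back into n bits). The product fits in 8 bits exactly when bits 15 down to 7 are all identical, i.e. the high byte is just the sign extension of the low byte; in hardware that would be a 9-input all-equal check (XOR each of bits 15..8 against bit 7 and OR the results). The function mul8_checked is just an illustrative name:

    #include <stdint.h>
    #include <stdio.h>

    /* Multiply two signed 8-bit values and flag overflow, assuming a
     * full 8x8 -> 16-bit multiplier is available. The result fits in
     * 8 bits exactly when the 16-bit product equals the sign extension
     * of its own low byte. */
    static int8_t mul8_checked(int8_t a, int8_t b, int *overflow)
    {
        int16_t full = (int16_t)a * (int16_t)b;  /* 16-bit product */
        int8_t  low  = (int8_t)(full & 0xFF);    /* truncated result */
        *overflow = (full != (int16_t)low);      /* high bits disagree with sign */
        return low;
    }

    int main(void)
    {
        int ovf;
        int8_t r = mul8_checked(-5, 7, &ovf);
        printf("%d ovf=%d\n", r, ovf);           /* -35 ovf=0 */
        r = mul8_checked(100, 100, &ovf);
        printf("%d ovf=%d\n", r, ovf);           /* 16 ovf=1 (truncated) */
        return 0;
    }

So if the multiplier keeps its full-width product around, the overflow test comes almost for free; it only costs a handful of gates on the high-order bits.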