Timeline for Gray codes addition — Take 2
Current License: CC BY-SA 3.0
12 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| May 23, 2017 at 12:40 | history | edited | CommunityBot | | replaced http://stackoverflow.com/ with https://stackoverflow.com/ |
| Apr 21, 2015 at 13:26 | comment | added | Morwenn | | Actually, as noted in the description, the two calls to operator+ in the algorithm are recursive calls, not regular additions. My assumption is that regular binary addition is optimized at the hardware level, but there may be algorithms specialized for Gray codes that are more efficient in hardware than a double conversion. While I don't know of such hardware, finding new algorithms seemed like a good idea. So of course any C++ algorithm should be slower than the double conversion, but I still like exploring the other potential algorithms :) |
| Apr 21, 2015 at 13:09 | comment | added | Emily L. | | The thought experiment itself is interesting. A computer with a Gray representation would still have to use a binary representation (just coded as Gray); in fact, your algorithm assumes this. There is nothing that disqualifies either algorithm from being implemented in hardware on such a computer. In fact, your algorithm does rely on integer base-2 addition as well, because I can count two such operators in your implementation. I do not see what disqualifies either algorithm from the thought experiment. |
| Apr 21, 2015 at 9:26 | comment | added | Morwenn | | I will try to add some context to the thought experiment behind these Gray addition algorithms: at some point I tried to imagine what it would be like if computers didn't use the regular binary representation for integers but a Gray representation instead (for reasons that do not matter; it's a thought experiment). Hence I was trying to devise an algorithm that does not rely on a regular integer representation and cannot use bit operations that wouldn't make sense in a Gray code-based computer. |
| Apr 21, 2015 at 9:22 | comment | added | Morwenn | | To me, it's just an inlined Gray-to-binary conversion followed by an integer addition and a binary-to-Gray conversion. It uses the same Gray-to-binary algorithm that I use in my lib (except that I have a loop, but it depends on std::numeric_limits<>::digits so the compiler gets rid of it). I know for sure that my algorithm does not perform the same thing, for it does not rely on the integer addition :) |
| Apr 21, 2015 at 8:48 | comment | added | Emily L. | | No. It's a clever bit hack to get rid of the loops and replace them with constant time. But if you can't tell, does it matter? If it does, how can you be sure that your devised algorithm isn't really just an implicit conversion back and forth that you don't realise? At the end of the day, arithmetic operations are only defined for numbers, not for codes. Any algorithm that takes two codes and produces a code that represents the sum of the decoded input codes has done an implicit conversion at some point, otherwise the result doesn't make sense. It's just a matter of how obfuscated it is. |
| Apr 21, 2015 at 7:12 | comment | added | Morwenn | | How is the new algorithm different than binaryToGray(grayToBinary(x) + grayToBinary(y));? |
| Apr 20, 2015 at 11:42 | history | edited | Emily L. | CC BY-SA 3.0 | Constant time for the win. |
| Apr 20, 2015 at 11:21 | comment | added | Emily L. | | Nevertheless, my statement about branch misprediction still applies here. The loop in isomsb has a fixed number of iterations and it should have been unrolled by your compiler; if it hasn't, you can unroll and optimize it by hand by specializing the template (check your asm). I haven't studied your algorithm in detail, but if you can reduce the number of special cases (branches checking for base2==base2 at the top) you might gain some speed. |
| Apr 20, 2015 at 11:10 | comment | added | Morwenn | | Ok, it may not have been all that clear actually. Sorry about that -_- |
| Apr 20, 2015 at 11:05 | comment | added | Morwenn | | Er, well, yeah, I know that already. The point of my question is that I am trying to find another algorithm and optimize it, just for the sake of it. If I really needed to use Gray codes in an actual project, I wouldn't try to find such an algorithm; I would use the double conversion to/from regular binary. I may have forgotten to mention it in "take 2", but I am pretty sure that I mentioned it in the original question. |
| Apr 20, 2015 at 9:12 | history | answered | Emily L. | CC BY-SA 3.0 | |
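
For reference, the double conversion that the comments circle around (binaryToGray(grayToBinary(x) + grayToBinary(y)); in Morwenn's Apr 21 comment) can be sketched as follows. This is a minimal sketch assuming a fixed 32-bit unsigned width; grayToBinary and binaryToGray are the names from the comment, grayAdd is a hypothetical wrapper, and the actual code in the answer may differ. The xor-shift cascade is the standard loop-free decoding in the spirit of the "Constant time for the win" edit.

```cpp
#include <cstdint>
#include <iostream>

// Decode a 32-bit Gray code to its binary value: each output bit is the
// XOR of all Gray bits at or above it, computed with a loop-free cascade.
std::uint32_t grayToBinary(std::uint32_t gray)
{
    gray ^= gray >> 16;
    gray ^= gray >> 8;
    gray ^= gray >> 4;
    gray ^= gray >> 2;
    gray ^= gray >> 1;
    return gray;
}

// Encode a binary value as a (reflected) Gray code.
std::uint32_t binaryToGray(std::uint32_t binary)
{
    return binary ^ (binary >> 1);
}

// "Double conversion" addition (hypothetical wrapper name): decode both
// operands, add them as plain integers, re-encode the sum.
std::uint32_t grayAdd(std::uint32_t x, std::uint32_t y)
{
    return binaryToGray(grayToBinary(x) + grayToBinary(y));
}

int main()
{
    std::uint32_t a = binaryToGray(5);  // 0b111
    std::uint32_t b = binaryToGray(3);  // 0b010
    std::cout << grayToBinary(grayAdd(a, b)) << '\n';  // prints 8
}
```

The whole addition is branch-free: a handful of shifts and xors around a single integer add.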
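
Morwenn's Apr 21 comment also mentions a loop-based Gray-to-binary decoding bounded by std::numeric_limits<>::digits that the compiler optimizes away. A rough reconstruction under that assumption (the name grayToBinaryLoop is hypothetical; the library code may differ):

```cpp
#include <limits>

// Loop form of the same decoding: the trip count depends only on
// std::numeric_limits<Unsigned>::digits, so an optimizing compiler can
// fully unroll it into the xor-shift cascade shown above.
template<typename Unsigned>
Unsigned grayToBinaryLoop(Unsigned gray)
{
    for (int shift = 1; shift < std::numeric_limits<Unsigned>::digits; shift *= 2) {
        gray ^= gray >> shift;
    }
    return gray;
}
```

Because the trip count is a compile-time constant, unrolling this loop is exactly the kind of transformation Emily L. suggests verifying in the generated asm.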