Algorithm Library

...and strictly for my own personal use when I am out and about
writing programs. I may loosen the restrictions later. For
now, this makes some of my services worth something. Email me at jmichae3 att yahoo dott com if you want to use it.

[Bin2Gray] | [Gray2Bin] | [PrintBits]

binary-to-gray-code algorithm

Someday I'll work on analyzing the bit patterns to determine how to make such a device implementable in HW. I think I am close. I am sure it's been done, but I don't see it published anywhere yet. Strange how something as useful as this is not in the textbooks I had in college.

```
unsigned LeftmostOneBit(unsigned n, unsigned bitLength)
{
    //scan from msb to lsb; return the position of the highest set bit.
    for (unsigned pos = bitLength; pos > 0U; pos--) {
        unsigned bitMask = 1U << (pos - 1U);
        if ((n & bitMask) != 0U) {
            return pos - 1U;
        }
    }
    return (unsigned)0;  //should never get here unless n=0.
}

unsigned RightmostOneBit(unsigned n, unsigned bitLength)
{
    //scan from lsb to msb; return the position of the lowest set bit.
    for (unsigned pos = 0U; pos < bitLength; pos++) {
        unsigned bitMask = 1U << pos;
        if ((n & bitMask) != 0U) {
            return pos;
        }
    }
    return (unsigned)0;  //should never get here unless n=0.
}

unsigned BinaryToGrayCode(unsigned n, unsigned bitLength)
{
    //not recommended for high-speed operations - very bit-manip intensive.
    //this IS recommended for creating in HW - should be VERY fast.
    //simply move from lsb to msb-1, where newCurrentBit = currentBit XOR nextBit.
    //notice the msb is always copied without modification.
    unsigned result = 0;
    for (unsigned i = 0U; i + 1U < bitLength; i++) {
        unsigned currentBit = (n >> i) & 1U;
        unsigned nextBit = (n >> (i + 1U)) & 1U;
        result |= (currentBit ^ nextBit) << i;
    }
    result |= n & (1U << (bitLength - 1U));  //copy the msb unchanged
    return result;
}

//the fastest algorithm yet.
unsigned BinaryToGrayCode2(unsigned n)
{
    return n ^ (n >> 1);
}

//the "fastest algorithm yet" - but note it needs the original binary passed in
//as a second argument, so it only round-trips in the test rig below.
unsigned GrayCodeToBinary2(unsigned gray, unsigned bin)
{
    return gray ^ (bin >> 1);
}

//gray code test rig. printbits() is defined further down the page.
int main(int argc, char* argv[])
{
    int i;
    for (i = 0; i < 32; i++) {
        printbits(i, 5); putchar(' ');
        printbits(BinaryToGrayCode2(i), 5); putchar(' ');
        printbits(GrayCodeToBinary2(BinaryToGrayCode2(i), i), 5); printf("\n");
    }
    return 0;
}

```

This concept in HW doesn't really need a shift register, just bits in the right places.

And I just saw I can optimize by eliminating the redundant msb XOR, since the msb passes through unchanged. It is probably prone to glitches, though.

gray-code-to-binary algorithm

```
//the "fastest algorithm yet" - but see below; it needs the binary answer passed in.
unsigned GrayCodeToBinary2(unsigned g, unsigned b)
{
    return g ^ (b >> 1);
}
```

Somehow I think that this gray-code-to-binary algorithm and associated circuit I wrote is totally bogus. It seems that once the bits are in Gray code, I have not yet figured out a mathematical way to bring the Gray code back to binary. Sorry. So, without further ado, here is C code from the Wikipedia Gray code article. I am sure it works. The article does not seem to provide a fast binary-to-gray-code algorithm.

```
long inverseGray(long n) {
    long ish, ans, idiv;
    ish = 1;
    ans = n;
    while (true) {     //in plain C, add #include <stdbool.h> or use for(;;)
        idiv = ans >> ish;
        ans ^= idiv;
        if (idiv <= 1 || ish == 32)
            return ans;
        ish <<= 1;     //double number of shifts next time
    }
}
```

printbits

```
//version 1: builds the string first, then prints it in one shot.
//needs <cstdio>, or <iostream> and using std::cout when NO_IOSTREAM is not defined.
void printbits(unsigned n, unsigned numBits)
{
    int index = 0;
    static char bitstr[129];	//room for 128 bits plus the null char
    //work from msb to lsb, storing bits as '1' or '0' in a string.
    for (unsigned bit = 1U << (numBits - 1); bit > 0U; bit >>= 1) {
        if ((n & bit) == 0U) {
            bitstr[index] = '0';
        } else {
            bitstr[index] = '1';
        }
        index++;
    }
    bitstr[index] = '\0';
#ifdef NO_IOSTREAM
    printf("%s", bitstr);
#else
    cout << bitstr;
#endif
}

//version 2: a shorter alternative that prints one bit at a time.
//(use one version or the other - in plain C the two definitions would collide.)
void printbit(unsigned n) { putchar('0' + n); }

void printbits(unsigned n, int nbits)
{
    int i;
    for (i = nbits - 1; i >= 0; i--) {
        printbit((n & (1U << i)) >> i);
    }
}
```
A challenge is to convert to decimal from a bitstream as fast as possible.
That means divides should be avoided.
What I need is a very fast way to divide long bitstreams and get a remainder < 10 (decimal) along with the quotient.
If there is a better way than that or conversion to Packed BCD arithmetic, I would like to see that.
Q: for a given string of bits of length N, how many bits worth of precision are needed to handle adding (worst case) 66 to the packed BCD to produce a carry? Q: How long will the result be? (A: 1 extra bit.) Assume we will process in [multiple] chunks of 4 bits. Worst case:
| binary | decimal | #bits | #dec. digits | ceil(#dec. digits) | #bits\*log(2)/log(10) | ceil(#bits\*log(2)/log(10)) |
|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 0.25 | 1 | 0.3 | 1 |
| 11 | 3 | 2 | 0.5 | 1 | 0.6 | 1 |
| 111 | 7 | 3 | 0.75 | 1 | 0.9 | 1 |
| 1111 | 15 | 4 | 1.25 | 2 | 1.2 | 2 |
| 11111111 | 255 | 8 | 2.5 | 3 | 2.41 | 3 |
| 111111111111 | 4095 | 12 | 3.75 | 4 | 3.61 | 4 |
| 1111111111111111 | 65535 | 16 | 4.75 | 5 | 4.82 | 5 |
| 1111111111111111111 | 524287 | 20 | 5.75 | 6 | 6.02 | 7 |
| 11111111111111111111 | 1048575 | 21 | 6.25 | 7 | 6.32 | 7 |
| 111111111111111111111 | 2097152 | 22 | 6.5 | 7 | 6.62 | 7 |
I am told n\*log(2)/log(10) approximates the number of digits needed to represent a binary number in decimal. I find this to be true, but unfortunately not exact enough. With the ceiling() function it would be close enough to be useful, but the problem is that logarithms take so long to execute on a processor and are not easily represented in high-speed HW (...or are they?). I heard of someone who attended PCC Sylvania who short-circuited the Newton-Raphson method and came out with something much faster.
Revisions:
July 1, 2002: fixed the &, <, and > characters in the C code, which were not HTML-ized inside PRE tags.
Anyone who copied the code will have found that it didn't work.
Also corrected erroneous comments and termination conditions in RightmostOneBit(). Don't know what I was thinking.