Bit Hacks Explained
Counting bits set by lookup table
static const unsigned char BitsSetTable256[256] =
{
# define B2(n) n, n+1, n+1, n+2
# define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)
# define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)
B6(0), B6(1), B6(1), B6(2)
};
unsigned int v; // count the number of bits set in 32-bit value v
unsigned int c; // c is the total bits set in v
// Option 1:
c = BitsSetTable256[v & 0xff] +
    BitsSetTable256[(v >> 8) & 0xff] +
    BitsSetTable256[(v >> 16) & 0xff] +
    BitsSetTable256[v >> 24];
// Option 2:
unsigned char *p = (unsigned char *) &v;
c = BitsSetTable256[p[0]] +
    BitsSetTable256[p[1]] +
    BitsSetTable256[p[2]] +
    BitsSetTable256[p[3]];
// To instead generate the table algorithmically at run time
// (the array must then be declared without const):
BitsSetTable256[0] = 0;
for (int i = 0; i < 256; i++)
{
BitsSetTable256[i] = (i & 1) + BitsSetTable256[i / 2];
}
On July 14, 2009, Hallvard Furuseth suggested the macro-compacted table.
Explanation:
The lookup-table method uses a precomputed table to read off the result for the value being analyzed directly. How is that table built? Take 4-bit binary numbers as an example.
1. First, observe the binary representations, laid out four per row:
0000 0001 0010 0011
0100 0101 0110 0111
1000 1001 1010 1011
1100 1101 1110 1111
Within each row, the count of '1' bits grows in the pattern 0, 1, 1, 2 relative to the row's first entry; and the counts of the first entries of the four-number groups themselves grow in the same 0, 1, 1, 2 pattern.
2. From this we can derive a table of bit counts. Writing a generic row base as x, every block of four entries has the shape x+0, x+1, x+1, x+2:
0, 1, 1, 2        0+0, 0+1, 0+1, 0+2        x+0,     x+1,     x+1,     x+2
1, 2, 2, 3   ==   1+0, 1+1, 1+1, 1+2   ==   (x+1)+0, (x+1)+1, (x+1)+1, (x+1)+2
2, 3, 3, 4        2+0, 2+1, 2+1, 2+2        (x+2)+0, (x+2)+1, (x+2)+1, (x+2)+2
3, 4, 4, 5        3+0, 3+1, 3+1, 3+2        (x+3)+0, (x+3)+1, (x+3)+1, (x+3)+2
Since the row bases also follow the 0, 1, 1, 2 pattern, the same expansion can be applied recursively, which is exactly what the nested macros exploit.
3. In C we could easily finish the rest with this iterative idea, but to have the table available as a compile-time constant, macros are used instead, which gives:
static const unsigned char BitsSetTable256[256] =
{
# define B2(n) n, n+1, n+1, n+2
# define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)
# define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)
B6(0), B6(1), B6(1), B6(2)
};
Reverse all 32 bits of a 32-bit word
n = (n & 0xaaaaaaaa) >> 1  | (n & 0x55555555) << 1;  // Swap adjacent bits:        abcd -> badc
n = (n & 0xcccccccc) >> 2  | (n & 0x33333333) << 2;  // Swap adjacent 2-bit pairs: badc -> dcba
n = (n & 0xf0f0f0f0) >> 4  | (n & 0x0f0f0f0f) << 4;  // Swap adjacent nibbles
n = (n & 0xff00ff00) >> 8  | (n & 0x00ff00ff) << 8;  // Swap adjacent bytes
n = (n & 0xffff0000) >> 16 | (n & 0x0000ffff) << 16; // Swap the two 16-bit halves
As you can see, this method is essentially a recursive (divide-and-conquer) idea:
1. To reverse 32 bits, swap the two 16-bit halves.
2. To reverse each 16-bit half, treat it as two 8-bit halves and swap them.
3. To reverse each 8 bits, swap its two 4-bit halves.
4. ......
5. To reverse 2 bits, a masked shift suffices.
Moreover, the code does not depend on n being unsigned: only >> could trigger sign extension, and before every >> an & has already cleared the most significant bit.
If the hexadecimal constants are puzzling, write them out in binary.
Convert a nibble into an ASCII hex character
n["0123456789ABCDEF"]; // 'n' must be in the range [0, 15]
The tricky part here is really the syntax; let's rewrite it in a more familiar form:
char HexChar[] = "0123456789ABCDEF";
// Familiar forms, except perhaps the last one:
HexChar[n] == *(HexChar + n) == *(n + HexChar) == n[HexChar];
// The same equivalences hold with the string literal directly:
"0123456789ABCDEF"[n] == *("0123456789ABCDEF" + n) == *(n + "0123456789ABCDEF") == n["0123456789ABCDEF"];
Swap the Values of two 32-bit variables
a = a ^ b;
b = a ^ b;
a = a ^ b;
Personal understanding: the same idea works with subtraction and addition (well-defined for unsigned types; signed overflow is undefined behavior):
b = a - b;
a = a - b;
b = b + a;
Note: if a and b refer to the same object, XOR-swapping it with itself would zero it out, hence the short-circuit guard:
&(a) == &(b) || (a ^= b ^= a ^= b);
(The chained expression modifies a twice without intervening sequencing, which is technically undefined behavior in C; prefer three separate statements in real code.)
reference: http://graphics.stanford.edu/~seander/bithacks.html