Efficient way to OR adjacent bits in a 64-bit integer

What I'd like to do is take a 64-bit unsigned integer consisting of pairs of bits, and create from it a 32-bit integer containing 0 if both bits in the corresponding pair are 0, and 1 otherwise. In other words, convert something that looks like this:

01 00 10 11 

into something that looks like this:

 1 0 1 1 

The two obvious solutions are a brute-force loop, or a lookup table per byte followed by eight lookups combined into the final result with ORs and shifts, but I'm sure there should be a clever bit-twiddling way to do this. I will be doing this in C++ for 64-bit integers, but if anyone knows an efficient way to do it for shorter integers, I'm sure I can figure out how to scale it up.
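For reference, the byte-at-a-time lookup-table baseline described above might look roughly like this; the table, its initializer, and the function name are mine, and the eight lookups are combined with shifts and ORs as described:

    #include <stdint.h>

    static uint8_t PAIR_LUT[256];              // maps one byte (4 bit-pairs) to a 4-bit result

    static void init_pair_lut(void)
    {
        for (int b = 0; b < 256; b++) {
            uint8_t r = 0;
            for (int i = 0; i < 4; i++)
                if (b & (3 << (2 * i)))        // is either bit of pair i set?
                    r |= 1 << i;
            PAIR_LUT[b] = r;
        }
    }

    static uint32_t calc_lut8(uint64_t x)
    {
        uint32_t r = 0;
        for (int i = 0; i < 8; i++)            // eight lookups, 4 result bits each
            r |= (uint32_t)PAIR_LUT[(x >> (8 * i)) & 0xFF] << (4 * i);
        return r;
    }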

Here is a portable C++ implementation. It seems to work during my brief testing. The deinterleave code is based on this SO question.

    uint64_t calc(uint64_t n)
    {
        // (odd | even)
        uint64_t x = (n & 0x5555555555555555ull) | ((n & 0xAAAAAAAAAAAAAAAAull) >> 1);

        // deinterleave
        x = (x | (x >> 1)) & 0x3333333333333333ull;
        x = (x | (x >> 2)) & 0x0F0F0F0F0F0F0F0Full;
        x = (x | (x >> 4)) & 0x00FF00FF00FF00FFull;
        x = (x | (x >> 8)) & 0x0000FFFF0000FFFFull;
        x = (x | (x >> 16)) & 0x00000000FFFFFFFFull;

        return x;
    }

gcc, clang, and msvc all compile this down to about 30 instructions.

From the comments, there is a modification that can be made:

  • Change the first line to use a single bit-mask operation to select only the "odd" bits.

The possibly (?) improved code is:

    uint64_t calc(uint64_t n)
    {
        // (odd | even)
        uint64_t x = (n | (n >> 1)) & 0x5555555555555555ull; // single bits

        // ... the rest of the deinterleave
        x = (x | (x >> 1)) & 0x3333333333333333ull;  // bit pairs
        x = (x | (x >> 2)) & 0x0F0F0F0F0F0F0F0Full;  // nibbles
        x = (x | (x >> 4)) & 0x00FF00FF00FF00FFull;  // octets
        x = (x | (x >> 8)) & 0x0000FFFF0000FFFFull;  // halfwords
        x = (x | (x >> 16)) & 0x00000000FFFFFFFFull; // words

        return x;
    }

On the x86 architecture with the BMI2 instruction set, this is probably the fastest solution:

    #include <stdint.h>
    #include <x86intrin.h>

    uint32_t calc (uint64_t a)
    {
        return _pext_u64(a, 0x5555555555555555ull) |
               _pext_u64(a, 0xaaaaaaaaaaaaaaaaull);
    }

This compiles to 5 instructions in total.

If you don't have pext and you still want to do better than the trivial way, then this extraction can be expressed as a logarithmic number (if you generalize it in terms of length) of bit moves:

    // OR adjacent bits, destroys the odd bits but it doesn't matter
    x = (x | (x >> 1)) & rep8(0x55);
    // gather the even bits with delta swaps
    x = bitmove(x, rep8(0x44), 1);       // make pairs
    x = bitmove(x, rep8(0x30), 2);       // make nibbles
    x = bitmove(x, rep4(0x0F00), 4);     // make bytes
    x = bitmove(x, rep2(0x00FF0000), 8); // make words
    res = (uint32_t)(x | (x >> 16));     // final step is simpler

where:

    bitmove(x, mask, step) {
        // move the masked bits right by step and clear them from their old position
        // (a plain OR would leave stale copies behind that corrupt the later steps)
        return (x & ~mask) | ((x & mask) >> step);
    }

repk is just so I can write shorter constants. rep8(0x44) = 0x4444444444444444, etc.
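For completeness, here is one way the repk shorthand and bitmove could be spelled out so that the snippet above compiles; the helper definitions and the calc_bitmove wrapper are mine, with rep8/rep4/rep2 simply replicating a byte / 16-bit / 32-bit pattern across all 64 bits:

    #include <stdint.h>

    static inline uint64_t rep8(uint64_t b) { return b * 0x0101010101010101ull; } // repeat a byte 8 times
    static inline uint64_t rep4(uint64_t h) { return h * 0x0001000100010001ull; } // repeat a 16-bit pattern 4 times
    static inline uint64_t rep2(uint64_t w) { return w * 0x0000000100000001ull; } // repeat a 32-bit pattern 2 times

    // Move the bits selected by mask right by step, clearing them from their old position.
    static inline uint64_t bitmove(uint64_t x, uint64_t mask, unsigned step)
    {
        return (x & ~mask) | ((x & mask) >> step);
    }

    static uint32_t calc_bitmove(uint64_t x)
    {
        x = (x | (x >> 1)) & rep8(0x55);     // one OR'd bit per pair, in the even positions
        x = bitmove(x, rep8(0x44), 1);       // make pairs
        x = bitmove(x, rep8(0x30), 2);       // make nibbles
        x = bitmove(x, rep4(0x0F00), 4);     // make bytes
        x = bitmove(x, rep2(0x00FF0000), 8); // make words
        return (uint32_t)(x | (x >> 16));    // final step is simpler
    }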

Also, if you do have pext, you can do it with only one of them, which is probably faster and at least shorter:

 _pext_u64(x | (x >> 1), rep8(0x55)); 

Okay, let's make this dirtier then (possibly buggy):

    uint64_t x;
    uint64_t even_bits = x & 0xAAAAAAAAAAAAAAAAull;
    uint64_t odd_bits  = x & 0x5555555555555555ull;

Now, my original solution did this:

    // wrong
    even_bits >>= 1;
    unsigned int solution = even_bits | odd_bits;

However, as JackAidley pointed out, while this aligns the bits together, it doesn't remove the gaps in the middle!

Thankfully, we can use the very helpful _pext instruction from the BMI2 instruction set.

u64 _pext_u64(u64 a, u64 m) – Extract bits from a at the corresponding bit locations specified by mask m to contiguous low bits in dst; the remaining upper bits in dst are set to zero.

    solution = _pext_u64(solution, 0x5555555555555555ull); // compact the 32 even-position bits

Alternatively, instead of separating the bits out with & and >>, you might just use _pext twice on the original number with the two masks above (which would split it up into two contiguous 32-bit numbers), and then simply or the results.
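Sketched out, that alternative might look like this (the function name is mine; it assumes BMI2 and is essentially the same as the two-pext answer earlier):

    #include <stdint.h>
    #include <x86intrin.h>

    uint32_t calc_two_pext(uint64_t x)
    {
        uint64_t from_even = _pext_u64(x, 0xAAAAAAAAAAAAAAAAull); // what this answer calls even_bits
        uint64_t from_odd  = _pext_u64(x, 0x5555555555555555ull); // what this answer calls odd_bits
        return (uint32_t)(from_even | from_odd);
    }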

However, if you don't have BMI2, I'm pretty sure removing the gaps would still involve a loop; a bit simpler than your original idea, perhaps.
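A minimal sketch of such a loop fallback (no BMI2 needed), simply testing each pair directly; the function name is mine:

    #include <stdint.h>

    uint32_t calc_loop(uint64_t x)
    {
        uint32_t result = 0;
        for (int i = 0; i < 32; i++)
            if (x & (3ull << (2 * i)))   // is either bit of pair i set?
                result |= 1u << i;
        return result;
    }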

A slight improvement over the LUT approach (4 lookups instead of 8):

Compute the bitwise OR and clear every other bit. Then intertwine the bits of pairs of bytes to yield four usable bytes. Finally, reorder the bits in the four bytes (mapped onto the quadword) by means of a 256-entry lookup table:

    Q = (Q | (Q << 1)) & 0xAAAAAAAAAAAAL;                    // OR in pairs
    Q |= Q >> 9;                                             // Intertwine 4 words into 4 bytes
    B0 = LUT[B0]; B1 = LUT[B2]; B2 = LUT[B4]; B3 = LUT[B6];  // Rearrange bits in bytes
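Written out in scalar form, this 4-lookup scheme might look like the following; this is my own reading of the description (LUT256, init_lut256, and calc_lut4 are hypothetical names), indexing bytes 0, 2, 4 and 6 of Q and using the full 64-bit mask so the top pairs are kept:

    #include <stdint.h>

    static uint8_t LUT256[256];   // undoes the bit interleave within one byte

    static void init_lut256(void)
    {
        for (int v = 0; v < 256; v++) {
            uint8_t out = 0;
            for (int j = 0; j < 4; j++) {
                if (v & (1u << (2 * j + 1))) out |= 1u << j;       // pairs 0..3 end up at odd positions
                if (v & (1u << (2 * j)))     out |= 1u << (j + 4); // pairs 4..7 end up at even positions
            }
            LUT256[v] = out;
        }
    }

    static uint32_t calc_lut4(uint64_t q)
    {
        q = (q | (q << 1)) & 0xAAAAAAAAAAAAAAAAull;  // OR each pair into its upper bit
        q |= q >> 9;                                 // fold each odd byte's bits into the even byte below it
        uint32_t b0 = LUT256[(uint8_t)(q      )];    // byte 0 of Q
        uint32_t b1 = LUT256[(uint8_t)(q >> 16)];    // byte 2 of Q
        uint32_t b2 = LUT256[(uint8_t)(q >> 32)];    // byte 4 of Q
        uint32_t b3 = LUT256[(uint8_t)(q >> 48)];    // byte 6 of Q
        return b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
    }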

The hard part seems to be packing the bits after OR-ing them. The OR-ing is done by:

 ored = (x | (x>>1)) & 0x5555555555555555; 

(assuming ints are large enough so that we don't have to use suffixes). Then we can pack them in steps, first two-by-two, then four-by-four, and so on:

    pack2  = ((ored   *     3) >>  1) & 0x3333333333333333;
    pack4  = ((pack2  *     5) >>  2) & 0x0F0F0F0F0F0F0F0F;
    pack8  = ((pack4  *    17) >>  4) & 0x00FF00FF00FF00FF;
    pack16 = ((pack8  *   257) >>  8) & 0x0000FFFF0000FFFF;
    pack32 = ((pack16 * 65537) >> 16) & 0xFFFFFFFF;
    // (or cast to uint32_t instead of the final & 0xFFF...)

What happens in the packing is that by multiplying we combine the data with a shifted copy of itself. In your example, the first multiplication would be (I denote zeros that come from the masking as o, and the other 0s, from the original data, as 0):

      o1o0o1o1
          x 11
    ----------
      o1o0o1o1
     o1o0o1o1
    ----------
     o11001111
      ^^  ^^
     o10oo11o   <- these are the bits we want to keep.
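To make the worked example concrete, here is the question's 8-bit sample pushed through the first two packing steps; this is a small self-check I added, and the values in the comments are my own arithmetic:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t x     = 0b01001011;                  // the question's example: pairs 01 00 10 11
        uint8_t ored  = (x | (x >> 1)) & 0x55;       // 0b01000101: one OR'd bit per pair
        uint8_t pack2 = ((ored  * 3) >> 1) & 0x33;   // 0b00100011: pairs packed two-by-two
        uint8_t pack4 = ((pack2 * 5) >> 2) & 0x0F;   // 0b00001011: the expected result 1011
        assert(pack4 == 0b1011);
        return 0;
    }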

We could also have done the same thing with ORs and shifts:

    ored = (ored | (ored >> 1)) & 0x3333333333333333;
    ored = (ored | (ored >> 2)) & 0x0F0F0F0F0F0F0F0F;
    ored = (ored | (ored >> 4)) & 0x00FF00FF00FF00FF;
    ored = (ored | (ored >> 8)) & 0x0000FFFF0000FFFF;
    ored = (ored | (ored >> 16)) & 0xFFFFFFFF;
    // ored = ((uint32_t)ored | (uint32_t)(ored>>16));  // helps some compilers make better code, esp. on x86

I made some vectorized versions (the godbolt link still has some big design-notes comments) and did some benchmarks back when this question was new. I was going to spend more time on it, but never got back to it. Posting what I have so I can close this browser tab. >.< Improvements welcome.

I don't have a Haswell I could test on, so I couldn't benchmark the pextr version against this. I'm sure it's faster, though, since it's only 4 fast instructions.

    *** Sandybridge (i5-2500k, so no hyperthreading)
    *** 64bit, gcc 5.2 with -O3 -fno-tree-vectorize results:
    TODO: update benchmarks for latest code changes

    total cycles, and insn/clock, for the test-loop
    This measures only throughput, not latency,
    and a bottleneck on one execution port might make a function look worse in a microbench
    than it will do when mixed with other code that can keep the other ports busy.

    Lower numbers in the first column are better:
    these are total cycle counts in Megacycles, and correspond to execution time
    but they take frequency scaling / turbo out of the mix.
    (We're not cache / memory bound at all, so low core clock = fewer cycles for cache miss doesn't matter).

        AVX                   no AVX
     887.519Mc 2.70Ipc      887.758Mc 2.70Ipc   use_orbits_shift_right
    1140.68Mc  2.45Ipc     1140.47Mc  2.46Ipc   use_orbits_mul (old version that right-shifted after each)
     718.038Mc 2.79Ipc      716.452Mc 2.79Ipc   use_orbits_x86_lea
     767.836Mc 2.74Ipc     1027.96Mc  2.53Ipc   use_orbits_sse2_shift
     619.466Mc 2.90Ipc      816.698Mc 2.69Ipc   use_orbits_ssse3_shift
     845.988Mc 2.72Ipc      845.537Mc 2.72Ipc   use_orbits_ssse3_shift_scalar_mmx (gimped by stupid compiler)
     583.239Mc 2.92Ipc      686.792Mc 2.91Ipc   use_orbits_ssse3_interleave_scalar
     547.386Mc 2.92Ipc      730.259Mc 2.88Ipc   use_orbits_ssse3_interleave

    The fastest (for throughput in a loop) with AVX is orbits_ssse3_interleave
    The fastest (for throughput in a loop) without AVX is orbits_ssse3_interleave_scalar
    but obits_x86_lea comes very close.

    AVX for non-destructive 3-operand vector insns helps a lot
    Maybe a bit less important on IvB and later, where mov-elimination handles mov uops at register-rename time

    // Tables generated with the following commands:
    // for i in avx.perf{{2..4},{6..10}};do awk '/cycles / {c=$1; gsub(",", "", c); } /insns per cy/ {print c / 1000000 "Mc " $4"Ipc"}' *"$i"*;done | column -c 50 -x
    // Include 0 and 1 for hosts with pextr
    // 5 is omitted because it's not written

Almost certainly the best version (with BMI2) is:

    #include <stdint.h>
    #include <x86intrin.h>   // for _pext_u64 (build with -mbmi2 or a suitable -march)

    #define LOBITS64 0x5555555555555555ull
    #define HIBITS64 0xaaaaaaaaaaaaaaaaull

    uint32_t orbits_1pext (uint64_t a) {
        // a|a<<1 compiles more efficiently on x86 than a|a>>1, because of LEA for non-destructive left-shift
        return _pext_u64( a | a<<1, HIBITS64);
    }

This compiles to:

    lea     rax, [rdi+rdi]
    or      rdi, rax
    movabs  rax, -6148914691236517206
    pext    rax, rdi, rax
    ret

So it's only 4 uops, and the critical-path latency is 5c = 3 (pext) + 1 (or) + 1 (lea), on Intel Haswell. Throughput should be one result per cycle (with no loop overhead or loading/storing). The mov imm for the constant can be hoisted out of a loop, though, because it isn't destroyed. This means that throughput-wise we only need 3 fused-domain uops per result.

The mov r, imm64 isn't pretty. (A 1-uop broadcast of a 32-bit or 8-bit immediate to a 64-bit register would be ideal, but there's no such instruction.) Having the constant in data memory is an option, but inline in the instruction stream is nice. A 64b constant takes a lot of uop-cache space, which makes the version that does pext with two different masks even worse. Generating one mask from the other with a not could help with that, though: movabs / pext / not / pext / or, but that's still 5 insns compared to the 4 enabled by the lea trick.
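As a sketch, that "generate one mask from the other" variant might look like this in intrinsics (the function name is mine; it assumes BMI2 and leaves the not to the compiler):

    #include <stdint.h>
    #include <x86intrin.h>

    uint32_t orbits_2pext_notmask(uint64_t a)
    {
        uint64_t m = 0x5555555555555555ull;                      // one movabs; the other mask is just ~m
        return (uint32_t)(_pext_u64(a, m) | _pext_u64(a, ~m));   // pext / not / pext / or
    }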


The best version (with AVX) is:

    #include <immintrin.h>

    /* Yves Daoust's idea, operating on nibbles instead of bytes:
       original:
       Q= (Q | (Q << 1)) & 0xAAAAAAAAAAAAL  // OR in pairs
       Q|= Q >> 9;                          // Intertwine 4 words into 4 bytes
       B0= LUT[B0]; B1= LUT[B2]; B2= LUT[B4]; B3= LUT[B6];  // Rearrange bits in bytes

       To operate on nibbles,
       Q= (Q | (Q << 1)) & 0xAAAAAAAAAAAAL  // OR in pairs, same as before
       Q|= Q>>5                             // Intertwine 8 nibbles into 8 bytes
       // pshufb as a LUT to re-order the bits within each nibble (to undo the interleave)
       // right-shift and OR to combine nibbles
       // pshufb as a byte-shuffle to put the 4 bytes we want into the low 4
    */
    uint32_t orbits_ssse3_interleave(uint64_t scalar_a)
    {
        // do some of this in GP regs if not doing two 64b elements in parallel.
        // esp. beneficial for AMD Bulldozer-family, where integer and vector ops don't share execution ports
        // but VEX-encoded SSE saves mov instructions

        __m128i a = _mm_cvtsi64_si128(scalar_a);
        // element size doesn't matter, any bits shifted out of element boundaries would have been masked off anyway.
        __m128i lshift = _mm_slli_epi64(a, 1);
        lshift = _mm_or_si128(lshift, a);
        lshift = _mm_and_si128(lshift, _mm_set1_epi32(0xaaaaaaaaUL));
        // a      = bits: hgfedcba  (same thing in other bytes)
        // lshift = hg 0 fe 0 dc 0 ba 0
        // lshift = s  0 r  0 q  0 p  0

        // lshift = s 0 r 0 q 0 p 0
        __m128i rshift = _mm_srli_epi64(lshift, 5);  // again, element size doesn't matter, we're keeping only the low nibbles
        // rshift =          s 0 r 0 q 0 p 0  (the last zero ORs with the top bit of the low nibble in the next byte over)
        __m128i nibbles = _mm_or_si128(rshift, lshift);
        nibbles = _mm_and_si128(nibbles, _mm_set1_epi8(0x0f) );  // have to zero the high nibbles: the sign bit affects pshufb

        // nibbles = 0 0 0 0 qspr
        // pshufb -> 0 0 0 0 srqp
        const __m128i BITORDER_NIBBLE_LUT = _mm_setr_epi8(  // setr: first arg goes in the low byte, indexed by 0b0000
            0b0000, 0b0100, 0b0001, 0b0101,
            0b1000, 0b1100, 0b1001, 0b1101,
            0b0010, 0b0110, 0b0011, 0b0111,
            0b1010, 0b1110, 0b1011, 0b1111 );
        __m128i ord_nibbles = _mm_shuffle_epi8(BITORDER_NIBBLE_LUT, nibbles);

        // want          00 00 00 00 AB CD EF GH from:
        // ord_nibbles = 0A0B0C0D0E0F0G0H
        //                0A0B0C0D0E0F0G0 H(shifted out)
        __m128i merged_nibbles = _mm_or_si128(ord_nibbles, _mm_srli_epi64(ord_nibbles, 4));
        // merged_nibbles = 0A AB BC CD DE EF FG GH.  We want every other byte of this.
        //                  7  6  5  4  3  2  1  0
        // pshufb is the most efficient way.  Mask and then packuswb would work, but uses the shuffle port just like pshufb
        __m128i ord_bytes = _mm_shuffle_epi8(merged_nibbles, _mm_set_epi8(-1,-1,-1,-1, 14,12,10,8,
                                                                          -1,-1,-1,-1,  6, 4, 2,0) );
        return _mm_cvtsi128_si32(ord_bytes);  // movd the low32 of the vector
        // _mm_extract_epi32(ord_bytes, 2);   // If operating on two inputs in parallel: SSE4.1 PEXTRD the result from the upper half of the reg.
    }

The best version without AVX is a slight modification that only works with one input at a time, using SIMD only for the shuffling. In theory, using MMX instead of SSE would make more sense, esp. if we care about first-gen Core2, where 64b pshufb is fast but 128b pshufb is not single-cycle. Anyway, compilers did a bad job with MMX intrinsics. Also, EMMS is slow.

    // same as orbits_ssse3_interleave, but doing some of the math in integer regs. (non-vectorized)
    // esp. beneficial for AMD Bulldozer-family, where integer and vector ops don't share execution ports
    // VEX-encoded SSE saves mov instructions, so full vector is preferable if building with VEX-encoding
    // Use MMX for Silvermont/Atom/Merom(Core2): pshufb is slow for xmm, but fast for MMX.  Only 64b shuffle unit?
    uint32_t orbits_ssse3_interleave_scalar(uint64_t scalar_a)
    {
        uint64_t lshift = (scalar_a | scalar_a << 1);
        lshift &= HIBITS64;

        uint64_t rshift = lshift >> 5;
        // rshift = s 0 r 0 q 0 p 0  (the last zero ORs with the top bit of the low nibble in the next byte over)
        uint64_t nibbles_scalar = (rshift | lshift) & 0x0f0f0f0f0f0f0f0fULL;
        // have to zero the high nibbles: the sign bit affects pshufb
        __m128i nibbles = _mm_cvtsi64_si128(nibbles_scalar);

        // nibbles = 0 0 0 0 qspr
        // pshufb -> 0 0 0 0 srqp
        const __m128i BITORDER_NIBBLE_LUT = _mm_setr_epi8(  // setr: first arg goes in the low byte, indexed by 0b0000
            0b0000, 0b0100, 0b0001, 0b0101,
            0b1000, 0b1100, 0b1001, 0b1101,
            0b0010, 0b0110, 0b0011, 0b0111,
            0b1010, 0b1110, 0b1011, 0b1111 );
        __m128i ord_nibbles = _mm_shuffle_epi8(BITORDER_NIBBLE_LUT, nibbles);

        // want          00 00 00 00 AB CD EF GH from:
        // ord_nibbles = 0A0B0C0D0E0F0G0H
        //                0A0B0C0D0E0F0G0 H(shifted out)
        __m128i merged_nibbles = _mm_or_si128(ord_nibbles, _mm_srli_epi64(ord_nibbles, 4));
        // merged_nibbles = 0A AB BC CD DE EF FG GH.  We want every other byte of this.
        //                  7  6  5  4  3  2  1  0
        // pshufb is the most efficient way.  Mask and then packuswb would work, but uses the shuffle port just like pshufb
        __m128i ord_bytes = _mm_shuffle_epi8(merged_nibbles, _mm_set_epi8(0,0,0,0, 0,0,0,0,
                                                                          0,0,0,0, 6,4,2,0));
        return _mm_cvtsi128_si32(ord_bytes);  // movd the low32 of the vector
    }

Sorry for the mostly code-dump answer. At this point I didn't feel it was worth spending a huge amount of time discussing things more than the comments already do. See http://agner.org/optimize/ for guides to optimizing for specific microarchitectures, and the other resources in the x86 tag wiki.