Closed
| | |
|---|---|
| Bugzilla Link | 34724 |
| Version | trunk |
| OS | Linux |
| CC | @alexey-bataev, @efriedma-quic, @hfinkel, @RKSimon, @rotateright, @ZviRackover |
Extended Description
Prior to r310260, the compiler was optimizing some add/sub + shuffle patterns to horizontal add/sub instructions, but no longer seems to do so.
Consider this example:
```c
#include <immintrin.h>

__attribute__((noinline))
__m128 add_ps_001(__m128 a, __m128 b) {
  __m128 r = (__m128){ a[0] + a[1], a[2] + a[3], b[0] + b[1], b[2] + b[3] };
  return __builtin_shufflevector(r, a, -1, 1, 2, 3);
}
```
When compiled with "-S -O2 -march=bdver2", a compiler prior to r310260 generated the following assembly:
```asm
vmovaps (%rcx), %xmm0
vhaddps (%rdx), %xmm0, %xmm0
retq
```
After r310260, the compiler instead generates the following code for the same function:
```asm
vmovaps (%rdx), %xmm0
vmovddup 8(%rcx), %xmm1          # xmm1 = mem[0,0]
vinsertps $28, %xmm0, %xmm1, %xmm2   # xmm2 = xmm1[0],xmm0[0],zero,zero
vinsertps $76, %xmm1, %xmm0, %xmm1   # xmm1 = xmm1[1],xmm0[1],zero,zero
vaddps %xmm1, %xmm2, %xmm1
vpermilpd $1, %xmm0, %xmm2           # xmm2 = xmm0[1,0]
vpermilps $231, %xmm0, %xmm0         # xmm0 = xmm0[3,1,2,3]
vaddss %xmm0, %xmm2, %xmm0
vpermilps $208, %xmm1, %xmm1         # xmm1 = xmm1[0,0,1,3]
vinsertps $48, %xmm0, %xmm1, %xmm0   # xmm0 = xmm1[0,1,2],xmm0[0]
retq
```