This patch greatly improves the performance of QCMS transformations on x86 &
x86_64 systems. Some notes:
0. On 32-bit x86 systems the patch selects at runtime between the non-SIMD,
SSE, and SSE2 code paths (see the dispatch sketch after this list).
1. On x86_64 systems the SSE2 code path is always taken, since SSE2 is part
of the baseline x86_64 instruction set. The non-SIMD and SSE code paths are
left intact, but contemporary versions of the GCC and MSVC compilers can see
that they are unreachable and optimize them away.
2. The execution time of the SSE2 code path is reduced by 67% relative to
the original Intel/Microsoft-syntax assembly code, as measured on a
Pentium 4 (Northwood) 2.4 GHz CPU with DDR1 RAM.
3. The SSE code path provides an 80% reduction in execution time relative to
the non-SIMD code path, as measured on a Pentium 3 (Coppermine) 1.26 GHz CPU
with SDRAM.
4. The code has been split out into separate files so that each SIMD variant
can be built with the appropriate CFLAGS (-msse and -msse2) when using GCC.
5. Try to land again, this time with __attribute__((__force_align_arg_pointer__))
to avoid crashes on Linux (see the alignment sketch after this list).
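
As a rough illustration of notes 0 and 1, runtime dispatch of this kind is
typically keyed off CPUID. The sketch below is hypothetical: the names
(transform_scalar, transform_sse, transform_sse2, select_transform) are
illustrative rather than the actual QCMS identifiers, and each SIMD variant
is assumed to live in its own file built with the matching -msse / -msse2
flag (note 4):

    #include <stddef.h>
    #include <stdint.h>
    #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    #include <cpuid.h>
    #endif

    typedef void (*transform_fn)(const uint8_t *src, uint8_t *dest,
                                 size_t length);

    /* Hypothetical per-path implementations, each compiled in its own
     * file with the matching -msse / -msse2 flag. */
    void transform_scalar(const uint8_t *src, uint8_t *dest, size_t length);
    void transform_sse(const uint8_t *src, uint8_t *dest, size_t length);
    void transform_sse2(const uint8_t *src, uint8_t *dest, size_t length);

    static transform_fn select_transform(void)
    {
    #if defined(__x86_64__) || defined(_M_X64)
            /* SSE2 is baseline on x86_64, so the branches below are
             * provably unreachable and the compiler drops them. */
            return transform_sse2;
    #elif defined(__GNUC__) && defined(__i386__)
            unsigned int eax, ebx, ecx, edx;
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                    if (edx & (1u << 26))   /* CPUID.1:EDX bit 26 = SSE2 */
                            return transform_sse2;
                    if (edx & (1u << 25))   /* CPUID.1:EDX bit 25 = SSE */
                            return transform_sse;
            }
            return transform_scalar;
    #else
            return transform_scalar;
    #endif
    }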
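
For note 5, __attribute__((__force_align_arg_pointer__)) tells GCC to realign
the stack on entry to the function, so 16-byte-aligned SSE spills are safe
even when a 32-bit caller only guarantees 4-byte stack alignment. A minimal
sketch, with a placeholder name and body rather than the real transform:

    #include <emmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    #if defined(__GNUC__) && defined(__i386__)
    __attribute__((__force_align_arg_pointer__))
    #endif
    void transform_sse2(const uint8_t *src, uint8_t *dest, size_t length)
    {
            size_t i;
            /* Placeholder body: 16 bytes per iteration with unaligned
             * loads/stores; any aligned __m128i spill to the stack is
             * safe thanks to the realigned stack pointer. */
            for (i = 0; i + 16 <= length; i += 16) {
                    __m128i v = _mm_loadu_si128((const __m128i *)(src + i));
                    _mm_storeu_si128((__m128i *)(dest + i), v);
            }
            for (; i < length; i++)
                    dest[i] = src[i];
    }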
Makes the number of output entries produced by invert_lut() a parameter and
changes all callers to use a minimum of 256 entries when computing the
inverse, so that curves with only a few entries do not yield a coarsely
quantized inverse.
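
A hedged sketch of what such a parameterized inversion can look like; the
signature, names, and the linear-interpolation search below are illustrative
and assume a monotonically increasing table with at least two entries, not
the exact QCMS implementation:

    #include <stdint.h>
    #include <stdlib.h>

    /* Sample the inverse of a monotonically increasing 16-bit curve at
     * out_length evenly spaced output codes. Returns NULL on failure. */
    static uint16_t *invert_lut(const uint16_t *table, int length,
                                size_t out_length)
    {
            uint16_t *output = malloc(out_length * sizeof(uint16_t));
            size_t i;
            if (!output || length < 2 || out_length < 2) {
                    free(output);
                    return NULL;
            }
            for (i = 0; i < out_length; i++) {
                    /* Output code we want to hit, spread over 0..65535. */
                    double y = (double)i * 65535.0 / (double)(out_length - 1);
                    double lo, hi, t;
                    int j = 0;
                    /* Find the segment of the curve that brackets y. */
                    while (j < length - 2 && table[j + 1] < y)
                            j++;
                    lo = table[j];
                    hi = table[j + 1];
                    t = (hi > lo) ? (y - lo) / (hi - lo) : 0.0;
                    if (t < 0.0) t = 0.0;
                    if (t > 1.0) t = 1.0;
                    /* Map the bracketing position back to the input
                     * domain, scaled to 16 bits. */
                    output[i] = (uint16_t)(((double)j + t) * 65535.0 /
                                           (double)(length - 1) + 0.5);
            }
            return output;
    }

Callers would then clamp the requested size from below, e.g. with a
hypothetical curve of `length` entries:

    size_t n = length > 256 ? (size_t)length : 256;
    uint16_t *inverse = invert_lut(curve, length, n);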