Update pixman to ad3cbfb073fc325e1b3152898ca71b8255675957

Alan Coopersmith (1):
      Sun's copyrights belong to Oracle now

Alexandros Frantzis (2):
      Add simple support for the r8g8b8a8 and r8g8b8x8 formats.
      Add support for the r8g8b8a8 and r8g8b8x8 formats to the tests.

Andrea Canciani (14):
      Improve precision of linear gradients
      Make classification consistent with rasterization
      Remove unused enum value
      Fix an overflow in the new radial gradient code
      Remove unused stop_range field
      Fix opacity check
      Improve conical gradients opacity check
      Improve handling of tangent circles
      Add a test for radial gradients
      Fix compilation on Win32
      test: Fix tests for compilation on Windows
      test: Add Makefile for Win32
      Do not include unused headers
      test: Silence MSVC warnings

Cyril Brulebois (2):
      Fix argument quoting for AC_INIT.
      Fix linking issues when HAVE_FEENABLEEXCEPT is set.

Jon TURNEY (2):
      Plug another leak in alphamap test
      Remove stray #include <fenv.h>

Rolland Dudemaine (4):
      test: Fix for mismatched 'fence_malloc' prototype/implementation
      Correct the initialization of 'max_vx'
      test: Use the right enum types instead of int to fix warnings
      Fix "variable was set but never used" warnings

Scott McCreary (1):
      Added check to find pthread on Haiku.

Siarhei Siamashka (62):
      Fixed broken configure check for __thread support
      Do CPU features detection from 'constructor' function when compiled with gcc
      ARM: fix 'vld1.8'->'vld1.32' typo in add_8888_8888 NEON fast path
      ARM: NEON: source image pixel fetcher can be overridden now
      ARM: nearest scaling support for NEON scanline compositing functions
      ARM: macro template in C code to simplify using scaled fast paths
      ARM: performance tuning of NEON nearest scaled pixel fetcher
      ARM: NEON optimization for scaled over_8888_8888 with nearest filter
      ARM: NEON optimization for scaled over_8888_0565 with nearest filter
      ARM: NEON optimization for scaled src_8888_0565 with nearest filter
      ARM: NEON optimization for scaled src_0565_8888 with nearest filter
      ARM: optimization for scaled src_0565_0565 with nearest filter
      C fast path for a1 fill operation
      ARM: added 'neon_composite_over_n_8_8' fast path
      ARM: introduced 'fetch_mask_pixblock' macro to simplify code
      ARM: better NEON instructions scheduling for over_n_8_0565
      ARM: added 'neon_composite_over_8888_n_0565' fast path
      ARM: reuse common NEON code for over_{n_8|8888_n|8888_8}_0565
      ARM: added 'neon_composite_over_0565_n_0565' fast path
      ARM: added 'neon_composite_add_8888_8_8888' fast path
      ARM: better NEON instructions scheduling for add_8888_8888_8888
      ARM: added 'neon_composite_add_n_8_8888' fast path
      ARM: added 'neon_composite_add_8888_n_8888' fast path
      ARM: added flags parameter to some asm fast path wrapper macros
      ARM: added 'neon_composite_in_n_8' fast path
      ARM: added 'neon_src_rpixbuf_8888' fast path
      Fix for potential unaligned memory accesses
      COPYING: added Nokia to the list of copyright holders
      Revert "Fix "syntax error: empty declaration" warnings."
      Fix for "syntax error: empty declaration" Solaris Studio warnings
      Workaround for a preprocessor issue in old Sun Studio
      Bugfix for a corner case in 'pixman_transform_is_inverse'
      Make 'fast_composite_scaled_nearest_*' less suspicious
      A new configure option --enable-static-testprogs
      ARM: do /proc/self/auxv based cpu features detection only in linux
      The code in 'bitmap_addrect' already assumes non-null 'reg->data'
      test: affine-test updated to stress 90/180/270 degrees rotation more
      New flags for 90/180/270 rotation
      C fast paths for a simple 90/270 degrees rotation
      Use const modifiers for source buffers in nearest scaling fast paths
      test: Extend scaling-test to support a8/solid mask and ADD operation
      Support for a8 and solid mask in nearest scaling main loop template
      Better support for NONE repeat in nearest scaling main loop template
      ARM: new macro template for using scaled fast paths with a8 mask
      ARM: NEON optimization for nearest scaled over_8888_8_0565
      ARM: NEON optimization for nearest scaled over_0565_8_0565
      SSE2 optimization for nearest scaled over_8888_n_8888
      Ensure that tests run as the last step of a build for 'make check'
      Main loop template for fast single pass bilinear scaling
      test: check correctness of 'bilinear_pad_repeat_get_scanline_bounds'
      SSE2 optimization for bilinear scaled 'src_8888_8888'
      ARM: NEON optimization for bilinear scaled 'src_8888_8888'
      ARM: use prefetch in nearest scaled 'src_0565_0565'
      ARM: common macro for nearest scaling fast paths
      ARM: assembly optimized nearest scaled 'src_8888_8888'
      ARM: new bilinear fast path template macro in 'pixman-arm-common.h'
      ARM: NEON: common macro template for bilinear scanline scalers
      ARM: use common macro template for bilinear scaled 'src_8888_8888'
      ARM: NEON optimization for bilinear scaled 'src_8888_0565'
      ARM: NEON optimization for bilinear scaled 'src_0565_x888'
      ARM: NEON optimization for bilinear scaled 'src_0565_0565'
      ARM: a bit faster NEON bilinear scaling for r5g6b5 source images

Søren Sandmann Pedersen (79):
      Remove the class field from source_image_t
      Pre-release version bump to 0.19.6
      Post-release version bump to 0.19.7
      Pre-release version bump to 0.20.0
      Post-release version bump to 0.20.1
      Version bump 0.21.1.
      COPYING: Stop saying that a modification is currently under discussion.
      Remove workaround for a bug in the 1.6 X server.
      [mmx] Mark some of the output variables as early-clobber.
      Delete the source_image_t struct.
      Generate {a,x}8r8g8b8, a8, 565 fetchers for nearest/affine images
      Pre-release version bump
      Post-release version bump to 0.21.3
      test: Make composite test use some existing macros instead of defining its own
      Add enable_fp_exceptions() function in utils.[ch]
      Extend gradient-crash-test
      test: Move palette initialization to utils.[ch]
      test/utils.c: Initialize palette->rgba to 0.
      Make the argument to fence_malloc() an int64_t
      Add a stress-test program.
      Add a test compositing with the various PDF operators.
      Fix divide-by-zero in set_lum().
      sse2: Skip src pixels that are zero in sse2_composite_over_8888_n_8888()
      Add iterators in the general implementation
      Move initialization of iterators for bits images to pixman-bits-image.c
      Eliminate the _pixman_image_store_scanline_32/64 functions
      Move iterator initialization to the respective image files
      Virtualize iterator initialization
      Use an iterator in pixman_image_get_solid()
      Move get_scanline_32/64 to the bits part of the image struct
      Allow NULL property_changed function
      Consolidate the various get_scanline_32() into get_scanline_narrow()
      Linear: Optimize for horizontal gradients
      Get rid of the classify methods
      Add direct-write optimization back
      Skip fetching pixels when possible
      Turn on testing for destination transformation
      Fix destination fetching
      Fix dangling-pointer bug in bits_image_fetch_bilinear_no_repeat_8888().
      Pre-release version bump to 0.21.4
      Post-release version bump to 0.21.5
      Print a warning when a development snapshot is being configured.
      Move fallback decisions from implementations into pixman-cpu.c.
      Add a test for over_x888_8_0565 in lowlevel_blt_bench().
      Add SSE2 fetcher for x8r8g8b8
      Add SSE2 fetcher for a8
      Improve performance of sse2_combine_over_u()
      Add SSE2 fetcher for 0565
      Add pixman-conical-gradient.c to Makefile.win32.
      Move all the GTK+ based test programs to a new subdir, "demos"
      Add @TESTPROGS_EXTRA_LDFLAGS@ to AM_LDFLAGS
      test/Makefile.am: Move all the TEST_LDADD into a new global LDADD.
      Add pixman_composite_trapezoids().
      Add a test program for pixman_composite_trapezoids().
      Add support for triangles to pixman.
      Add a test program, tri-test
      Optimize adding opaque trapezoids onto a8 destination.
      Add new public function pixman_add_triangles()
      Avoid marking images dirty when properties are reset
      In pixman_image_set_transform() allow NULL for transform
      Coding style:  core_combine_in_u_pixelsse2 -> core_combine_in_u_pixel_sse2
      sse2: Convert all uses of MMX registers to use SSE2 registers instead.
      sse2: Delete unused MMX functions and constants and all _mm_empty()s
      sse2: Don't compile pixman-sse2.c with -mmmx anymore
      sse2: Remove all the core_combine_* functions
      sse2: Delete obsolete or redundant comments
      sse2: Remove pixman-x64-mmx-emulation.h
      sse2: Minor coding style cleanups.
      Delete pixman-x64-mmx-emulation.h from pixman/Makefile.am
      Minor fix to the RELEASING file
      Pre-release version bump to 0.21.6
      Post-release version bump to 0.21.7
      test: In image_endian_swap() use pixman_image_get_format() to get the bpp.
      test: Do endian swapping of the source and destination images.
      In delegate_{src,dest}_iter_init() call delegate directly.
      Fill out parts of iters in _pixman_implementation_{src,dest}_iter_init()
      Simplify the prototype for iterator initializers.
      test: Randomize some tests if PIXMAN_RANDOMIZE_TESTS is set
      test: Fix infinite loop in composite
Jeff Muizelaar committed on 2011-03-28 13:52:09 -04:00
Parent: e6ae18a722
Commit: f5fc4f988f
38 changed files: 5558 insertions, 2439 deletions

@@ -73,8 +73,15 @@ USE_MMX=1
endif
USE_SSE2=1
MMX_CFLAGS=
ifneq (,$(filter 1400 1500, $(_MSC_VER)))
# MSVC 2005 and 2008 generate code that breaks alignment
# restrictions in debug mode so always optimize.
# See bug 640250 for more info.
SSE2_CFLAGS=-O2
else
SSE2_CFLAGS=
endif
endif
ifeq (arm,$(findstring arm,$(OS_TEST)))
USE_ARM_SIMD_MSVC=1
endif

@@ -210,6 +210,46 @@ fetch_scanline_b8g8r8x8 (pixman_image_t *image,
}
}
static void
fetch_scanline_r8g8b8a8 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
{
const uint32_t *bits = image->bits.bits + y * image->bits.rowstride;
const uint32_t *pixel = (uint32_t *)bits + x;
const uint32_t *end = pixel + width;
while (pixel < end)
{
uint32_t p = READ (image, pixel++);
*buffer++ = (((p & 0x000000ff) << 24) | (p >> 8));
}
}
static void
fetch_scanline_r8g8b8x8 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
{
const uint32_t *bits = image->bits.bits + y * image->bits.rowstride;
const uint32_t *pixel = (uint32_t *)bits + x;
const uint32_t *end = pixel + width;
while (pixel < end)
{
uint32_t p = READ (image, pixel++);
*buffer++ = (0xff000000 | (p >> 8));
}
}
static void
fetch_scanline_x14r6g6b6 (pixman_image_t *image,
int x,
@@ -1291,6 +1331,28 @@ fetch_pixel_b8g8r8x8 (bits_image_t *image,
(pixel & 0x0000ff00) << 8);
}
static uint32_t
fetch_pixel_r8g8b8a8 (bits_image_t *image,
int offset,
int line)
{
uint32_t *bits = image->bits + line * image->rowstride;
uint32_t pixel = READ (image, (uint32_t *)bits + offset);
return (((pixel & 0x000000ff) << 24) | (pixel >> 8));
}
static uint32_t
fetch_pixel_r8g8b8x8 (bits_image_t *image,
int offset,
int line)
{
uint32_t *bits = image->bits + line * image->rowstride;
uint32_t pixel = READ (image, (uint32_t *)bits + offset);
return (0xff000000 | (pixel >> 8));
}
static uint32_t
fetch_pixel_x14r6g6b6 (bits_image_t *image,
int offset,
@@ -2027,6 +2089,39 @@ store_scanline_b8g8r8x8 (bits_image_t * image,
}
}
static void
store_scanline_r8g8b8a8 (bits_image_t * image,
int x,
int y,
int width,
const uint32_t *values)
{
uint32_t *bits = image->bits + image->rowstride * y;
uint32_t *pixel = (uint32_t *)bits + x;
int i;
for (i = 0; i < width; ++i)
{
WRITE (image, pixel++,
((values[i] >> 24) & 0x000000ff) | (values[i] << 8));
}
}
static void
store_scanline_r8g8b8x8 (bits_image_t * image,
int x,
int y,
int width,
const uint32_t *values)
{
uint32_t *bits = image->bits + image->rowstride * y;
uint32_t *pixel = (uint32_t *)bits + x;
int i;
for (i = 0; i < width; ++i)
WRITE (image, pixel++, (values[i] << 8));
}
static void
store_scanline_x14r6g6b6 (bits_image_t * image,
int x,
@@ -2845,6 +2940,8 @@ static const format_info_t accessors[] =
FORMAT_INFO (x8b8g8r8),
FORMAT_INFO (b8g8r8a8),
FORMAT_INFO (b8g8r8x8),
FORMAT_INFO (r8g8b8a8),
FORMAT_INFO (r8g8b8x8),
FORMAT_INFO (x14r6g6b6),
/* 24bpp formats */

@@ -26,6 +26,8 @@
#ifndef PIXMAN_ARM_COMMON_H
#define PIXMAN_ARM_COMMON_H
#include "pixman-fast-path.h"
/* Define some macros which can expand into proxy functions between
* ARM assembly optimized functions and the rest of pixman fast path API.
*
@@ -45,6 +47,9 @@
* or mask), the corresponding stride argument is unused.
*/
#define SKIP_ZERO_SRC 1
#define SKIP_ZERO_MASK 2
#define PIXMAN_ARM_BIND_FAST_PATH_SRC_DST(cputype, name, \
src_type, src_cnt, \
dst_type, dst_cnt) \
@@ -85,7 +90,7 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
src_line, src_stride); \
}
#define PIXMAN_ARM_BIND_FAST_PATH_N_DST(cputype, name, \
#define PIXMAN_ARM_BIND_FAST_PATH_N_DST(flags, cputype, name, \
dst_type, dst_cnt) \
void \
pixman_composite_##name##_asm_##cputype (int32_t w, \
@@ -113,9 +118,10 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
int32_t dst_stride; \
uint32_t src; \
\
src = _pixman_image_get_solid (src_image, dst_image->bits.format); \
src = _pixman_image_get_solid ( \
imp, src_image, dst_image->bits.format); \
\
if (src == 0) \
if ((flags & SKIP_ZERO_SRC) && src == 0) \
return; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, dst_type, \
@@ -126,7 +132,7 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
src); \
}
#define PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST(cputype, name, \
#define PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST(flags, cputype, name, \
mask_type, mask_cnt, \
dst_type, dst_cnt) \
void \
@@ -159,9 +165,10 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
int32_t dst_stride, mask_stride; \
uint32_t src; \
\
src = _pixman_image_get_solid (src_image, dst_image->bits.format); \
src = _pixman_image_get_solid ( \
imp, src_image, dst_image->bits.format); \
\
if (src == 0) \
if ((flags & SKIP_ZERO_SRC) && src == 0) \
return; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, dst_type, \
@@ -175,7 +182,7 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
mask_line, mask_stride); \
}
#define PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST(cputype, name, \
#define PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST(flags, cputype, name, \
src_type, src_cnt, \
dst_type, dst_cnt) \
void \
@@ -207,9 +214,10 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
int32_t dst_stride, src_stride; \
uint32_t mask; \
\
mask = _pixman_image_get_solid (mask_image, dst_image->bits.format);\
mask = _pixman_image_get_solid ( \
imp, mask_image, dst_image->bits.format); \
\
if (mask == 0) \
if ((flags & SKIP_ZERO_MASK) && mask == 0) \
return; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, dst_type, \
@@ -270,4 +278,132 @@ cputype##_composite_##name (pixman_implementation_t *imp, \
mask_line, mask_stride); \
}
#define PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST(cputype, name, op, \
src_type, dst_type) \
void \
pixman_scaled_nearest_scanline_##name##_##op##_asm_##cputype ( \
int32_t w, \
dst_type * dst, \
const src_type * src, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x); \
\
static force_inline void \
scaled_nearest_scanline_##cputype##_##name##_##op (dst_type * pd, \
const src_type * ps, \
int32_t w, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x, \
pixman_fixed_t max_vx, \
pixman_bool_t zero_src) \
{ \
pixman_scaled_nearest_scanline_##name##_##op##_asm_##cputype (w, pd, ps, \
vx, unit_x);\
} \
\
FAST_NEAREST_MAINLOOP (cputype##_##name##_cover_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op, \
src_type, dst_type, COVER) \
FAST_NEAREST_MAINLOOP (cputype##_##name##_none_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op, \
src_type, dst_type, NONE) \
FAST_NEAREST_MAINLOOP (cputype##_##name##_pad_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op, \
src_type, dst_type, PAD)
/* Provide entries for the fast path table */
#define PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH(op,s,d,func) \
SIMPLE_NEAREST_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_NEAREST_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_NEAREST_FAST_PATH_PAD (op,s,d,func)
#define PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_A8_DST(flags, cputype, name, op, \
src_type, dst_type) \
void \
pixman_scaled_nearest_scanline_##name##_##op##_asm_##cputype ( \
int32_t w, \
dst_type * dst, \
const src_type * src, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x, \
const uint8_t * mask); \
\
static force_inline void \
scaled_nearest_scanline_##cputype##_##name##_##op (const uint8_t * mask, \
dst_type * pd, \
const src_type * ps, \
int32_t w, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x, \
pixman_fixed_t max_vx, \
pixman_bool_t zero_src) \
{ \
if ((flags & SKIP_ZERO_SRC) && zero_src) \
return; \
pixman_scaled_nearest_scanline_##name##_##op##_asm_##cputype (w, pd, ps, \
vx, unit_x, \
mask); \
} \
\
FAST_NEAREST_MAINLOOP_COMMON (cputype##_##name##_cover_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op,\
src_type, uint8_t, dst_type, COVER, TRUE, FALSE)\
FAST_NEAREST_MAINLOOP_COMMON (cputype##_##name##_none_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op,\
src_type, uint8_t, dst_type, NONE, TRUE, FALSE) \
FAST_NEAREST_MAINLOOP_COMMON (cputype##_##name##_pad_##op, \
scaled_nearest_scanline_##cputype##_##name##_##op,\
src_type, uint8_t, dst_type, PAD, TRUE, FALSE)
/* Provide entries for the fast path table */
#define PIXMAN_ARM_SIMPLE_NEAREST_A8_MASK_FAST_PATH(op,s,d,func) \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_PAD (op,s,d,func)
/*****************************************************************************/
#define PIXMAN_ARM_BIND_SCALED_BILINEAR_SRC_DST(flags, cputype, name, op, \
src_type, dst_type) \
void \
pixman_scaled_bilinear_scanline_##name##_##op##_asm_##cputype ( \
dst_type * dst, \
const src_type * top, \
const src_type * bottom, \
int wt, \
int wb, \
pixman_fixed_t x, \
pixman_fixed_t ux, \
int width); \
\
static force_inline void \
scaled_bilinear_scanline_##cputype##_##name##_##op ( \
dst_type * dst, \
const uint32_t * mask, \
const src_type * src_top, \
const src_type * src_bottom, \
int32_t w, \
int wt, \
int wb, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x, \
pixman_fixed_t max_vx, \
pixman_bool_t zero_src) \
{ \
if ((flags & SKIP_ZERO_SRC) && zero_src) \
return; \
pixman_scaled_bilinear_scanline_##name##_##op##_asm_##cputype ( \
dst, src_top, src_bottom, wt, wb, vx, unit_x, w); \
} \
\
FAST_BILINEAR_MAINLOOP_COMMON (cputype##_##name##_cover_##op, \
scaled_bilinear_scanline_##cputype##_##name##_##op, \
src_type, uint32_t, dst_type, COVER, FALSE, FALSE) \
FAST_BILINEAR_MAINLOOP_COMMON (cputype##_##name##_none_##op, \
scaled_bilinear_scanline_##cputype##_##name##_##op, \
src_type, uint32_t, dst_type, NONE, FALSE, FALSE) \
FAST_BILINEAR_MAINLOOP_COMMON (cputype##_##name##_pad_##op, \
scaled_bilinear_scanline_##cputype##_##name##_##op, \
src_type, uint32_t, dst_type, PAD, FALSE, FALSE)
#endif

The diff for one file is not shown because of its large size.

@@ -205,6 +205,121 @@
.endif
.endm
/*
* Pixel fetcher for nearest scaling (needs TMP1, TMP2, VX, UNIT_X register
* aliases to be defined)
*/
.macro pixld1_s elem_size, reg1, mem_operand
.if elem_size == 16
mov TMP1, VX, asr #16
add VX, VX, UNIT_X
add TMP1, mem_operand, TMP1, asl #1
mov TMP2, VX, asr #16
add VX, VX, UNIT_X
add TMP2, mem_operand, TMP2, asl #1
vld1.16 {d&reg1&[0]}, [TMP1, :16]
mov TMP1, VX, asr #16
add VX, VX, UNIT_X
add TMP1, mem_operand, TMP1, asl #1
vld1.16 {d&reg1&[1]}, [TMP2, :16]
mov TMP2, VX, asr #16
add VX, VX, UNIT_X
add TMP2, mem_operand, TMP2, asl #1
vld1.16 {d&reg1&[2]}, [TMP1, :16]
vld1.16 {d&reg1&[3]}, [TMP2, :16]
.elseif elem_size == 32
mov TMP1, VX, asr #16
add VX, VX, UNIT_X
add TMP1, mem_operand, TMP1, asl #2
mov TMP2, VX, asr #16
add VX, VX, UNIT_X
add TMP2, mem_operand, TMP2, asl #2
vld1.32 {d&reg1&[0]}, [TMP1, :32]
vld1.32 {d&reg1&[1]}, [TMP2, :32]
.else
.error "unsupported"
.endif
.endm
.macro pixld2_s elem_size, reg1, reg2, mem_operand
.if elem_size == 32
mov TMP1, VX, asr #16
add VX, VX, UNIT_X, asl #1
add TMP1, mem_operand, TMP1, asl #2
mov TMP2, VX, asr #16
sub VX, VX, UNIT_X
add TMP2, mem_operand, TMP2, asl #2
vld1.32 {d&reg1&[0]}, [TMP1, :32]
mov TMP1, VX, asr #16
add VX, VX, UNIT_X, asl #1
add TMP1, mem_operand, TMP1, asl #2
vld1.32 {d&reg2&[0]}, [TMP2, :32]
mov TMP2, VX, asr #16
add VX, VX, UNIT_X
add TMP2, mem_operand, TMP2, asl #2
vld1.32 {d&reg1&[1]}, [TMP1, :32]
vld1.32 {d&reg2&[1]}, [TMP2, :32]
.else
pixld1_s elem_size, reg1, mem_operand
pixld1_s elem_size, reg2, mem_operand
.endif
.endm
.macro pixld0_s elem_size, reg1, idx, mem_operand
.if elem_size == 16
mov TMP1, VX, asr #16
add VX, VX, UNIT_X
add TMP1, mem_operand, TMP1, asl #1
vld1.16 {d&reg1&[idx]}, [TMP1, :16]
.elseif elem_size == 32
mov TMP1, VX, asr #16
add VX, VX, UNIT_X
add TMP1, mem_operand, TMP1, asl #2
vld1.32 {d&reg1&[idx]}, [TMP1, :32]
.endif
.endm
.macro pixld_s_internal numbytes, elem_size, basereg, mem_operand
.if numbytes == 32
pixld2_s elem_size, %(basereg+4), %(basereg+5), mem_operand
pixld2_s elem_size, %(basereg+6), %(basereg+7), mem_operand
pixdeinterleave elem_size, %(basereg+4)
.elseif numbytes == 16
pixld2_s elem_size, %(basereg+2), %(basereg+3), mem_operand
.elseif numbytes == 8
pixld1_s elem_size, %(basereg+1), mem_operand
.elseif numbytes == 4
.if elem_size == 32
pixld0_s elem_size, %(basereg+0), 1, mem_operand
.elseif elem_size == 16
pixld0_s elem_size, %(basereg+0), 2, mem_operand
pixld0_s elem_size, %(basereg+0), 3, mem_operand
.else
pixld0_s elem_size, %(basereg+0), 4, mem_operand
pixld0_s elem_size, %(basereg+0), 5, mem_operand
pixld0_s elem_size, %(basereg+0), 6, mem_operand
pixld0_s elem_size, %(basereg+0), 7, mem_operand
.endif
.elseif numbytes == 2
.if elem_size == 16
pixld0_s elem_size, %(basereg+0), 1, mem_operand
.else
pixld0_s elem_size, %(basereg+0), 2, mem_operand
pixld0_s elem_size, %(basereg+0), 3, mem_operand
.endif
.elseif numbytes == 1
pixld0_s elem_size, %(basereg+0), 1, mem_operand
.else
.error "unsupported size: numbytes"
.endif
.endm
.macro pixld_s numpix, bpp, basereg, mem_operand
.if bpp > 0
pixld_s_internal %(numpix * bpp / 8), %(bpp), basereg, mem_operand
.endif
.endm
.macro vuzp8 reg1, reg2
vuzp.8 d&reg1, d&reg2
.endm
@@ -316,6 +431,11 @@
.endif
.endm
.macro fetch_mask_pixblock
pixld pixblock_size, mask_bpp, \
(mask_basereg - pixblock_size * mask_bpp / 64), MASK
.endm
/*
* Macro which is used to process leading pixels until destination
* pointer is properly aligned (at 16 bytes boundary). When destination
@@ -335,7 +455,7 @@ local skip1
tst DST_R, #lowbit
beq 1f
.endif
pixld (lowbit * 8 / dst_w_bpp), src_bpp, src_basereg, SRC
pixld_src (lowbit * 8 / dst_w_bpp), src_bpp, src_basereg, SRC
pixld (lowbit * 8 / dst_w_bpp), mask_bpp, mask_basereg, MASK
.if dst_r_bpp > 0
pixld_a (lowbit * 8 / dst_r_bpp), dst_r_bpp, dst_r_basereg, DST_R
@@ -397,7 +517,7 @@ local skip1
.if pixblock_size > chunk_size
tst W, #chunk_size
beq 1f
pixld chunk_size, src_bpp, src_basereg, SRC
pixld_src chunk_size, src_bpp, src_basereg, SRC
pixld chunk_size, mask_bpp, mask_basereg, MASK
.if dst_aligned_flag != 0
pixld_a chunk_size, dst_r_bpp, dst_r_basereg, DST_R
@@ -531,6 +651,13 @@ fname:
.set src_basereg, src_basereg_
.set mask_basereg, mask_basereg_
.macro pixld_src x:vararg
pixld x
.endm
.macro fetch_src_pixblock
pixld_src pixblock_size, src_bpp, \
(src_basereg - pixblock_size * src_bpp / 64), SRC
.endm
/*
* Assign symbolic names to registers
*/
@@ -696,8 +823,7 @@ fname:
/* Implement "head (tail_head) ... (tail_head) tail" loop pattern */
pixld_a pixblock_size, dst_r_bpp, \
(dst_r_basereg - pixblock_size * dst_r_bpp / 64), DST_R
pixld pixblock_size, src_bpp, \
(src_basereg - pixblock_size * src_bpp / 64), SRC
fetch_src_pixblock
pixld pixblock_size, mask_bpp, \
(mask_basereg - pixblock_size * mask_bpp / 64), MASK
PF add PF_X, PF_X, #pixblock_size
@@ -739,8 +865,7 @@ fname:
beq 1f
pixld pixblock_size, dst_r_bpp, \
(dst_r_basereg - pixblock_size * dst_r_bpp / 64), DST_R
pixld pixblock_size, src_bpp, \
(src_basereg - pixblock_size * src_bpp / 64), SRC
fetch_src_pixblock
pixld pixblock_size, mask_bpp, \
(mask_basereg - pixblock_size * mask_bpp / 64), MASK
process_pixblock_head
@@ -761,6 +886,9 @@ fname:
cleanup
pop {r4-r12, pc} /* exit */
.purgem fetch_src_pixblock
.purgem pixld_src
.unreq SRC
.unreq MASK
.unreq DST_R
@@ -784,7 +912,8 @@ fname:
* A simplified variant of function generation template for a single
* scanline processing (for implementing pixman combine functions)
*/
.macro generate_composite_function_single_scanline fname, \
.macro generate_composite_function_scanline use_nearest_scaling, \
fname, \
src_bpp_, \
mask_bpp_, \
dst_w_bpp_, \
@@ -821,15 +950,45 @@ fname:
.set dst_r_basereg, dst_r_basereg_
.set src_basereg, src_basereg_
.set mask_basereg, mask_basereg_
/*
* Assign symbolic names to registers
*/
.if use_nearest_scaling != 0
/*
* Assign symbolic names to registers for nearest scaling
*/
W .req r0
DST_W .req r1
SRC .req r2
VX .req r3
UNIT_X .req ip
MASK .req lr
TMP1 .req r4
TMP2 .req r5
DST_R .req r6
.macro pixld_src x:vararg
pixld_s x
.endm
ldr UNIT_X, [sp]
push {r4-r6, lr}
.if mask_bpp != 0
ldr MASK, [sp, #(16 + 4)]
.endif
.else
/*
* Assign symbolic names to registers
*/
W .req r0 /* width (is updated during processing) */
DST_W .req r1 /* destination buffer pointer for writes */
SRC .req r2 /* source buffer pointer */
DST_R .req ip /* destination buffer pointer for reads */
MASK .req r3 /* mask pointer */
.macro pixld_src x:vararg
pixld x
.endm
.endif
.if (((flags) & FLAG_DST_READWRITE) != 0)
.set dst_r_bpp, dst_w_bpp
.else
@@ -841,6 +1000,11 @@ fname:
.set DEINTERLEAVE_32BPP_ENABLED, 0
.endif
.macro fetch_src_pixblock
pixld_src pixblock_size, src_bpp, \
(src_basereg - pixblock_size * src_bpp / 64), SRC
.endm
init
mov DST_R, DST_W
@@ -857,8 +1021,7 @@ fname:
/* Implement "head (tail_head) ... (tail_head) tail" loop pattern */
pixld_a pixblock_size, dst_r_bpp, \
(dst_r_basereg - pixblock_size * dst_r_bpp / 64), DST_R
pixld pixblock_size, src_bpp, \
(src_basereg - pixblock_size * src_bpp / 64), SRC
fetch_src_pixblock
pixld pixblock_size, mask_bpp, \
(mask_basereg - pixblock_size * mask_bpp / 64), MASK
process_pixblock_head
@@ -880,7 +1043,11 @@ fname:
process_pixblock_tail_head
cleanup
bx lr /* exit */
.if use_nearest_scaling != 0
pop {r4-r6, pc} /* exit */
.else
bx lr /* exit */
.endif
8:
/* Process the remaining trailing pixels in the scanline (dst unaligned) */
process_trailing_pixels 0, 0, \
@@ -889,6 +1056,21 @@ fname:
process_pixblock_tail_head
cleanup
.if use_nearest_scaling != 0
pop {r4-r6, pc} /* exit */
.unreq DST_R
.unreq SRC
.unreq W
.unreq VX
.unreq UNIT_X
.unreq TMP1
.unreq TMP2
.unreq DST_W
.unreq MASK
.else
bx lr /* exit */
.unreq SRC
@@ -896,9 +1078,22 @@ fname:
.unreq DST_R
.unreq DST_W
.unreq W
.endif
.purgem fetch_src_pixblock
.purgem pixld_src
.endfunc
.endm
.macro generate_composite_function_single_scanline x:vararg
generate_composite_function_scanline 0, x
.endm
.macro generate_composite_function_nearest_scanline x:vararg
generate_composite_function_scanline 1, x
.endm
/* Default prologue/epilogue, nothing special needs to be done */
.macro default_init
@@ -963,3 +1158,20 @@ fname:
vsri.u16 out, tmp1, #5
vsri.u16 out, tmp2, #11
.endm
/*
 * Conversion of four r5g6b5 pixels (in) to four x8r8g8b8 pixels,
 * returned in the (out0, out1) register pair. Requires one temporary
 * 64-bit register (tmp). 'out1' and 'in' may overlap; the original
 * value of 'in' is lost.
 */
.macro convert_four_0565_to_x888_packed in, out0, out1, tmp
vshl.u16 out0, in, #5 /* G top 6 bits */
vshl.u16 tmp, in, #11 /* B top 5 bits */
vsri.u16 in, in, #5 /* R is ready in top bits */
vsri.u16 out0, out0, #6 /* G is ready in top bits */
vsri.u16 tmp, tmp, #5 /* B is ready in top bits */
vshr.u16 out1, in, #8 /* R is in place */
vsri.u16 out0, tmp, #8 /* G & B is in place */
vzip.u16 out0, out1 /* everything is in place */
.endm

@@ -52,6 +52,8 @@ PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, src_0888_0565_rev,
uint8_t, 3, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, src_pixbuf_8888,
uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, src_rpixbuf_8888,
uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, add_8_8,
uint8_t, 1, uint8_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, add_8888_8888,
@@ -63,29 +65,43 @@ PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, over_8888_8888,
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (neon, out_reverse_8_0565,
uint8_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_DST (neon, over_n_0565,
PIXMAN_ARM_BIND_FAST_PATH_N_DST (SKIP_ZERO_SRC, neon, over_n_0565,
uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_DST (neon, over_n_8888,
PIXMAN_ARM_BIND_FAST_PATH_N_DST (SKIP_ZERO_SRC, neon, over_n_8888,
uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_DST (neon, over_reverse_n_8888,
PIXMAN_ARM_BIND_FAST_PATH_N_DST (SKIP_ZERO_SRC, neon, over_reverse_n_8888,
uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_DST (0, neon, in_n_8,
uint8_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (neon, over_n_8_0565,
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, over_n_8_0565,
uint8_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (neon, over_n_8_8888,
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, over_n_8_8888,
uint8_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (neon, over_n_8888_8888_ca,
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, over_n_8888_8888_ca,
uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (neon, add_n_8_8,
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, over_n_8_8,
uint8_t, 1, uint8_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, add_n_8_8,
uint8_t, 1, uint8_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, neon, add_n_8_8888,
uint8_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (neon, over_8888_n_8888,
PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (SKIP_ZERO_MASK, neon, over_8888_n_8888,
uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (SKIP_ZERO_MASK, neon, over_8888_n_0565,
uint32_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (SKIP_ZERO_MASK, neon, over_0565_n_0565,
uint16_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (SKIP_ZERO_MASK, neon, add_8888_n_8888,
uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, add_8_8_8,
uint8_t, 1, uint8_t, 1, uint8_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, add_0565_8_0565,
uint16_t, 1, uint8_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, add_8888_8_8888,
uint32_t, 1, uint8_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, add_8888_8888_8888,
uint32_t, 1, uint32_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, over_8888_8_8888,
@@ -97,6 +113,29 @@ PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, over_8888_8_0565,
PIXMAN_ARM_BIND_FAST_PATH_SRC_MASK_DST (neon, over_0565_8_0565,
uint16_t, 1, uint8_t, 1, uint16_t, 1)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (neon, 8888_8888, OVER,
uint32_t, uint32_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (neon, 8888_0565, OVER,
uint32_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (neon, 8888_0565, SRC,
uint32_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (neon, 0565_8888, SRC,
uint16_t, uint32_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_A8_DST (SKIP_ZERO_SRC, neon, 8888_8_0565,
OVER, uint32_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_A8_DST (SKIP_ZERO_SRC, neon, 0565_8_0565,
OVER, uint16_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_BILINEAR_SRC_DST (0, neon, 8888_8888, SRC,
uint32_t, uint32_t)
PIXMAN_ARM_BIND_SCALED_BILINEAR_SRC_DST (0, neon, 8888_0565, SRC,
uint32_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_BILINEAR_SRC_DST (0, neon, 0565_x888, SRC,
uint16_t, uint32_t)
PIXMAN_ARM_BIND_SCALED_BILINEAR_SRC_DST (0, neon, 0565_0565, SRC,
uint16_t, uint16_t)
void
pixman_composite_src_n_8_asm_neon (int32_t w,
int32_t h,
@@ -226,6 +265,10 @@ static const pixman_fast_path_t arm_neon_fast_paths[] =
PIXMAN_STD_FAST_PATH (SRC, b8g8r8, null, x8r8g8b8, neon_composite_src_0888_8888_rev),
PIXMAN_STD_FAST_PATH (SRC, b8g8r8, null, r5g6b5, neon_composite_src_0888_0565_rev),
PIXMAN_STD_FAST_PATH (SRC, pixbuf, pixbuf, a8r8g8b8, neon_composite_src_pixbuf_8888),
PIXMAN_STD_FAST_PATH (SRC, pixbuf, pixbuf, a8b8g8r8, neon_composite_src_rpixbuf_8888),
PIXMAN_STD_FAST_PATH (SRC, rpixbuf, rpixbuf, a8r8g8b8, neon_composite_src_rpixbuf_8888),
PIXMAN_STD_FAST_PATH (SRC, rpixbuf, rpixbuf, a8b8g8r8, neon_composite_src_pixbuf_8888),
PIXMAN_STD_FAST_PATH (OVER, solid, a8, a8, neon_composite_over_n_8_8),
PIXMAN_STD_FAST_PATH (OVER, solid, a8, r5g6b5, neon_composite_over_n_8_0565),
PIXMAN_STD_FAST_PATH (OVER, solid, a8, b5g6r5, neon_composite_over_n_8_0565),
PIXMAN_STD_FAST_PATH (OVER, solid, a8, a8r8g8b8, neon_composite_over_n_8_8888),
@@ -241,6 +284,10 @@ static const pixman_fast_path_t arm_neon_fast_paths[] =
PIXMAN_STD_FAST_PATH_CA (OVER, solid, a8b8g8r8, x8b8g8r8, neon_composite_over_n_8888_8888_ca),
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, solid, a8r8g8b8, neon_composite_over_8888_n_8888),
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, solid, x8r8g8b8, neon_composite_over_8888_n_8888),
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, solid, r5g6b5, neon_composite_over_8888_n_0565),
PIXMAN_STD_FAST_PATH (OVER, a8b8g8r8, solid, b5g6r5, neon_composite_over_8888_n_0565),
PIXMAN_STD_FAST_PATH (OVER, r5g6b5, solid, r5g6b5, neon_composite_over_0565_n_0565),
PIXMAN_STD_FAST_PATH (OVER, b5g6r5, solid, b5g6r5, neon_composite_over_0565_n_0565),
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, a8, a8r8g8b8, neon_composite_over_8888_8_8888),
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, a8, x8r8g8b8, neon_composite_over_8888_8_8888),
PIXMAN_STD_FAST_PATH (OVER, a8b8g8r8, a8, a8b8g8r8, neon_composite_over_8888_8_8888),
@@ -259,18 +306,62 @@ static const pixman_fast_path_t arm_neon_fast_paths[] =
PIXMAN_STD_FAST_PATH (OVER, x8r8g8b8, null, a8r8g8b8, neon_composite_src_x888_8888),
PIXMAN_STD_FAST_PATH (OVER, x8b8g8r8, null, a8b8g8r8, neon_composite_src_x888_8888),
PIXMAN_STD_FAST_PATH (ADD, solid, a8, a8, neon_composite_add_n_8_8),
PIXMAN_STD_FAST_PATH (ADD, solid, a8, a8r8g8b8, neon_composite_add_n_8_8888),
PIXMAN_STD_FAST_PATH (ADD, solid, a8, a8b8g8r8, neon_composite_add_n_8_8888),
PIXMAN_STD_FAST_PATH (ADD, a8, a8, a8, neon_composite_add_8_8_8),
PIXMAN_STD_FAST_PATH (ADD, r5g6b5, a8, r5g6b5, neon_composite_add_0565_8_0565),
PIXMAN_STD_FAST_PATH (ADD, b5g6r5, a8, b5g6r5, neon_composite_add_0565_8_0565),
PIXMAN_STD_FAST_PATH (ADD, a8r8g8b8, a8, a8r8g8b8, neon_composite_add_8888_8_8888),
PIXMAN_STD_FAST_PATH (ADD, a8b8g8r8, a8, a8b8g8r8, neon_composite_add_8888_8_8888),
PIXMAN_STD_FAST_PATH (ADD, a8r8g8b8, a8r8g8b8, a8r8g8b8, neon_composite_add_8888_8888_8888),
PIXMAN_STD_FAST_PATH (ADD, a8r8g8b8, solid, a8r8g8b8, neon_composite_add_8888_n_8888),
PIXMAN_STD_FAST_PATH (ADD, a8b8g8r8, solid, a8b8g8r8, neon_composite_add_8888_n_8888),
PIXMAN_STD_FAST_PATH (ADD, a8, null, a8, neon_composite_add_8_8),
PIXMAN_STD_FAST_PATH (ADD, a8r8g8b8, null, a8r8g8b8, neon_composite_add_8888_8888),
PIXMAN_STD_FAST_PATH (ADD, a8b8g8r8, null, a8b8g8r8, neon_composite_add_8888_8888),
PIXMAN_STD_FAST_PATH (IN, solid, null, a8, neon_composite_in_n_8),
PIXMAN_STD_FAST_PATH (OVER_REVERSE, solid, null, a8r8g8b8, neon_composite_over_reverse_n_8888),
PIXMAN_STD_FAST_PATH (OVER_REVERSE, solid, null, a8b8g8r8, neon_composite_over_reverse_n_8888),
PIXMAN_STD_FAST_PATH (OUT_REVERSE, a8, null, r5g6b5, neon_composite_out_reverse_8_0565),
PIXMAN_STD_FAST_PATH (OUT_REVERSE, a8, null, b5g6r5, neon_composite_out_reverse_8_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8r8g8b8, a8r8g8b8, neon_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8b8g8r8, a8b8g8r8, neon_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8r8g8b8, x8r8g8b8, neon_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8b8g8r8, x8b8g8r8, neon_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8r8g8b8, r5g6b5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (OVER, a8b8g8r8, b5g6r5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8r8g8b8, r5g6b5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, x8r8g8b8, r5g6b5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8b8g8r8, b5g6r5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, x8b8g8r8, b5g6r5, neon_8888_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, b5g6r5, x8b8g8r8, neon_0565_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, r5g6b5, x8r8g8b8, neon_0565_8888),
/* Note: NONE repeat is not supported yet */
SIMPLE_NEAREST_FAST_PATH_COVER (SRC, r5g6b5, a8r8g8b8, neon_0565_8888),
SIMPLE_NEAREST_FAST_PATH_COVER (SRC, b5g6r5, a8b8g8r8, neon_0565_8888),
SIMPLE_NEAREST_FAST_PATH_PAD (SRC, r5g6b5, a8r8g8b8, neon_0565_8888),
SIMPLE_NEAREST_FAST_PATH_PAD (SRC, b5g6r5, a8b8g8r8, neon_0565_8888),
PIXMAN_ARM_SIMPLE_NEAREST_A8_MASK_FAST_PATH (OVER, a8r8g8b8, r5g6b5, neon_8888_8_0565),
PIXMAN_ARM_SIMPLE_NEAREST_A8_MASK_FAST_PATH (OVER, a8b8g8r8, b5g6r5, neon_8888_8_0565),
PIXMAN_ARM_SIMPLE_NEAREST_A8_MASK_FAST_PATH (OVER, r5g6b5, r5g6b5, neon_0565_8_0565),
PIXMAN_ARM_SIMPLE_NEAREST_A8_MASK_FAST_PATH (OVER, b5g6r5, b5g6r5, neon_0565_8_0565),
SIMPLE_BILINEAR_FAST_PATH (SRC, a8r8g8b8, a8r8g8b8, neon_8888_8888),
SIMPLE_BILINEAR_FAST_PATH (SRC, a8r8g8b8, x8r8g8b8, neon_8888_8888),
SIMPLE_BILINEAR_FAST_PATH (SRC, x8r8g8b8, x8r8g8b8, neon_8888_8888),
SIMPLE_BILINEAR_FAST_PATH (SRC, a8r8g8b8, r5g6b5, neon_8888_0565),
SIMPLE_BILINEAR_FAST_PATH (SRC, x8r8g8b8, r5g6b5, neon_8888_0565),
SIMPLE_BILINEAR_FAST_PATH (SRC, r5g6b5, x8r8g8b8, neon_0565_x888),
SIMPLE_BILINEAR_FAST_PATH (SRC, r5g6b5, r5g6b5, neon_0565_0565),
{ PIXMAN_OP_NONE },
};
@@ -353,13 +444,8 @@ BIND_COMBINE_U (add)
BIND_COMBINE_U (out_reverse)
pixman_implementation_t *
-_pixman_implementation_create_arm_neon (void)
+_pixman_implementation_create_arm_neon (pixman_implementation_t *fallback)
 {
-#ifdef USE_ARM_SIMD
-pixman_implementation_t *fallback = _pixman_implementation_create_arm_simd ();
-#else
-pixman_implementation_t *fallback = _pixman_implementation_create_fast_path ();
-#endif
pixman_implementation_t *imp =
_pixman_implementation_create (fallback, arm_neon_fast_paths);


@@ -1,5 +1,6 @@
/*
* Copyright © 2008 Mozilla Corporation
+ * Copyright © 2010 Nokia Corporation
*
* Permission to use, copy, modify, distribute, and sell this software and its
* documentation for any purpose is hereby granted without fee, provided that
@@ -328,3 +329,110 @@ pixman_asm_function pixman_composite_over_n_8_8888_asm_armv6
pop {r4, r5, r6, r7, r8, r9, r10, r11}
bx lr
.endfunc
/*
* Note: This code is only using armv5te instructions (not even armv6),
* but is scheduled for ARM Cortex-A8 pipeline. So it might need to
* be split into a few variants, tuned for each microarchitecture.
*
* TODO: In order to get good performance on ARM9/ARM11 cores (which don't
* have efficient write combining), it needs to be changed to use 16-byte
* aligned writes using STM instruction.
*
* Nearest scanline scaler macro template uses the following arguments:
* fname - name of the function to generate
* bpp_shift - (1 << bpp_shift) is the size of pixel in bytes
* t - type suffix for LDR/STR instructions
* prefetch_distance - prefetch in the source image by that many
* pixels ahead
* prefetch_braking_distance - stop prefetching when that many pixels are
* remaining before the end of scanline
*/
.macro generate_nearest_scanline_func fname, bpp_shift, t, \
prefetch_distance, \
prefetch_braking_distance
pixman_asm_function fname
W .req r0
DST .req r1
SRC .req r2
VX .req r3
UNIT_X .req ip
TMP1 .req r4
TMP2 .req r5
VXMASK .req r6
PF_OFFS .req r7
ldr UNIT_X, [sp]
push {r4, r5, r6, r7}
mvn VXMASK, #((1 << bpp_shift) - 1)
/* define helper macro */
.macro scale_2_pixels
ldr&t TMP1, [SRC, TMP1]
and TMP2, VXMASK, VX, lsr #(16 - bpp_shift)
add VX, VX, UNIT_X
str&t TMP1, [DST], #(1 << bpp_shift)
ldr&t TMP2, [SRC, TMP2]
and TMP1, VXMASK, VX, lsr #(16 - bpp_shift)
add VX, VX, UNIT_X
str&t TMP2, [DST], #(1 << bpp_shift)
.endm
/* now do the scaling */
and TMP1, VXMASK, VX, lsr #(16 - bpp_shift)
add VX, VX, UNIT_X
subs W, W, #(8 + prefetch_braking_distance)
blt 2f
/* calculate prefetch offset */
mov PF_OFFS, #prefetch_distance
mla PF_OFFS, UNIT_X, PF_OFFS, VX
1: /* main loop, process 8 pixels per iteration with prefetch */
subs W, W, #8
add PF_OFFS, UNIT_X, lsl #3
scale_2_pixels
scale_2_pixels
scale_2_pixels
scale_2_pixels
pld [SRC, PF_OFFS, lsr #(16 - bpp_shift)]
bge 1b
2:
subs W, W, #(4 - 8 - prefetch_braking_distance)
blt 2f
1: /* process the remaining pixels */
scale_2_pixels
scale_2_pixels
subs W, W, #4
bge 1b
2:
tst W, #2
beq 2f
scale_2_pixels
2:
tst W, #1
ldrne&t TMP1, [SRC, TMP1]
strne&t TMP1, [DST]
/* cleanup helper macro */
.purgem scale_2_pixels
.unreq DST
.unreq SRC
.unreq W
.unreq VX
.unreq UNIT_X
.unreq TMP1
.unreq TMP2
.unreq VXMASK
.unreq PF_OFFS
/* return */
pop {r4, r5, r6, r7}
bx lr
.endfunc
.endm
generate_nearest_scanline_func \
pixman_scaled_nearest_scanline_0565_0565_SRC_asm_armv6, 1, h, 80, 32
generate_nearest_scanline_func \
pixman_scaled_nearest_scanline_8888_8888_SRC_asm_armv6, 2, , 48, 32
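The 16.16 fixed-point stepping that `generate_nearest_scanline_func` unrolls (VX accumulates UNIT_X, and its integer part selects the source pixel) can be sketched in portable C. This is an illustrative reduction with an invented function name, not pixman code; the asm additionally handles prefetch and the 0565 pixel size:

```c
#include <stdint.h>

/* Nearest-neighbor horizontal scaling of one 32bpp scanline.
 * vx and unit_x are 16.16 fixed point; the integer part of vx
 * (vx >> 16) picks the source pixel, mirroring the asm's use of
 * "VX, lsr #16" scaled by the pixel size.
 */
static void
scale_scanline_nearest_8888 (uint32_t *dst, const uint32_t *src,
                             int width, uint32_t vx, uint32_t unit_x)
{
    int i;

    for (i = 0; i < width; i++)
    {
        dst[i] = src[vx >> 16];   /* integer part indexes the source */
        vx += unit_x;             /* fractional step accumulates     */
    }
}
```

With `unit_x = 0x8000` (a step of 0.5), each source pixel is emitted twice, which is a 2x nearest-neighbor upscale.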


@@ -29,6 +29,7 @@
#include "pixman-private.h"
#include "pixman-arm-common.h"
+#include "pixman-fast-path.h"
#if 0 /* This code was moved to 'pixman-arm-simd-asm.S' */
@@ -380,12 +381,17 @@ PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (armv6, add_8_8,
PIXMAN_ARM_BIND_FAST_PATH_SRC_DST (armv6, over_8888_8888,
uint32_t, 1, uint32_t, 1)
-PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (armv6, over_8888_n_8888,
+PIXMAN_ARM_BIND_FAST_PATH_SRC_N_DST (SKIP_ZERO_MASK, armv6, over_8888_n_8888,
 uint32_t, 1, uint32_t, 1)
-PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (armv6, over_n_8_8888,
+PIXMAN_ARM_BIND_FAST_PATH_N_MASK_DST (SKIP_ZERO_SRC, armv6, over_n_8_8888,
uint8_t, 1, uint32_t, 1)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (armv6, 0565_0565, SRC,
uint16_t, uint16_t)
PIXMAN_ARM_BIND_SCALED_NEAREST_SRC_DST (armv6, 8888_8888, SRC,
uint32_t, uint32_t)
static const pixman_fast_path_t arm_simd_fast_paths[] =
{
PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, null, a8r8g8b8, armv6_composite_over_8888_8888),
@@ -404,14 +410,23 @@ static const pixman_fast_path_t arm_simd_fast_paths[] =
PIXMAN_STD_FAST_PATH (OVER, solid, a8, a8b8g8r8, armv6_composite_over_n_8_8888),
PIXMAN_STD_FAST_PATH (OVER, solid, a8, x8b8g8r8, armv6_composite_over_n_8_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, r5g6b5, r5g6b5, armv6_0565_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, b5g6r5, b5g6r5, armv6_0565_0565),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8r8g8b8, a8r8g8b8, armv6_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8r8g8b8, x8r8g8b8, armv6_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, x8r8g8b8, x8r8g8b8, armv6_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8b8g8r8, a8b8g8r8, armv6_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, a8b8g8r8, x8b8g8r8, armv6_8888_8888),
PIXMAN_ARM_SIMPLE_NEAREST_FAST_PATH (SRC, x8b8g8r8, x8b8g8r8, armv6_8888_8888),
{ PIXMAN_OP_NONE },
};
pixman_implementation_t *
-_pixman_implementation_create_arm_simd (void)
+_pixman_implementation_create_arm_simd (pixman_implementation_t *fallback)
 {
-pixman_implementation_t *general = _pixman_implementation_create_fast_path ();
-pixman_implementation_t *imp = _pixman_implementation_create (general, arm_simd_fast_paths);
+pixman_implementation_t *imp = _pixman_implementation_create (fallback, arm_simd_fast_paths);
return imp;
}


@@ -35,43 +35,41 @@
#include "pixman-private.h"
#include "pixman-combine32.h"
-/* Store functions */
-void
-_pixman_image_store_scanline_32 (bits_image_t * image,
-int x,
-int y,
-int width,
-const uint32_t *buffer)
+/*
+ * By default, just evaluate the image at 32bpp and expand. Individual image
+ * types can plug in a better scanline getter if they want to. For example
+ * we could produce smoother gradients by evaluating them at higher color
+ * depth, but that's a project for the future.
+ */
+static void
+_pixman_image_get_scanline_generic_64 (pixman_image_t * image,
+int x,
+int y,
+int width,
+uint32_t * buffer,
+const uint32_t * mask)
 {
-image->store_scanline_32 (image, x, y, width, buffer);
+uint32_t *mask8 = NULL;
-if (image->common.alpha_map)
+/* Contract the mask image, if one exists, so that the 32-bit fetch
+ * function can use it.
+ */
+if (mask)
 {
-x -= image->common.alpha_origin_x;
-y -= image->common.alpha_origin_y;
+mask8 = pixman_malloc_ab (width, sizeof(uint32_t));
+if (!mask8)
+return;
-image->common.alpha_map->store_scanline_32 (
-image->common.alpha_map, x, y, width, buffer);
+pixman_contract (mask8, (uint64_t *)mask, width);
 }
-}
-void
-_pixman_image_store_scanline_64 (bits_image_t * image,
-int x,
-int y,
-int width,
-const uint32_t *buffer)
-{
-image->store_scanline_64 (image, x, y, width, buffer);
+/* Fetch the source image into the first half of buffer. */
+image->bits.get_scanline_32 (image, x, y, width, (uint32_t*)buffer, mask8);
-if (image->common.alpha_map)
-{
-x -= image->common.alpha_origin_x;
-y -= image->common.alpha_origin_y;
+/* Expand from 32bpp to 64bpp in place. */
+pixman_expand ((uint64_t *)buffer, buffer, PIXMAN_a8r8g8b8, width);
-image->common.alpha_map->store_scanline_64 (
-image->common.alpha_map, x, y, width, buffer);
-}
+free (mask8);
 }
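The in-place widening that `pixman_expand` performs for a8r8g8b8 replicates each 8-bit channel into 16 bits, so both endpoints are preserved exactly (0x00 maps to 0x0000 and 0xff to 0xffff). A self-contained single-pixel sketch (hypothetical helper name, not pixman's API):

```c
#include <stdint.h>

/* Widen one a8r8g8b8 pixel to 16 bits per channel by bit replication:
 * an 8-bit value v becomes (v << 8) | v, i.e. v * 257.
 */
static uint64_t
expand_8888_pixel (uint32_t p)
{
    uint64_t a = (p >> 24) & 0xff;
    uint64_t r = (p >> 16) & 0xff;
    uint64_t g = (p >> 8) & 0xff;
    uint64_t b = p & 0xff;

    a = (a << 8) | a;
    r = (r << 8) | r;
    g = (g << 8) | g;
    b = (b << 8) | b;

    return (a << 48) | (r << 32) | (g << 16) | b;
}
```

Bit replication, rather than a plain left shift, is what lets fully opaque or fully saturated channels stay exactly at the 16-bit maximum.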
/* Fetch functions */
@@ -297,6 +295,7 @@ bits_image_fetch_bilinear_no_repeat_8888 (pixman_image_t * ima,
uint32_t *bottom_row;
uint32_t *end;
uint32_t zero[2] = { 0, 0 };
+uint32_t one = 1;
int y, y1, y2;
int disty;
int mask_inc;
@@ -362,10 +361,8 @@ bits_image_fetch_bilinear_no_repeat_8888 (pixman_image_t * ima,
*/
if (!mask)
{
-uint32_t mask_bits = 1;
 mask_inc = 0;
-mask = &mask_bits;
+mask = &one;
}
else
{
@@ -907,6 +904,77 @@ bits_image_fetch_bilinear_affine (pixman_image_t * image,
}
}
static force_inline void
bits_image_fetch_nearest_affine (pixman_image_t * image,
int offset,
int line,
int width,
uint32_t * buffer,
const uint32_t * mask,
convert_pixel_t convert_pixel,
pixman_format_code_t format,
pixman_repeat_t repeat_mode)
{
pixman_fixed_t x, y;
pixman_fixed_t ux, uy;
pixman_vector_t v;
bits_image_t *bits = &image->bits;
int i;
/* reference point is the center of the pixel */
v.vector[0] = pixman_int_to_fixed (offset) + pixman_fixed_1 / 2;
v.vector[1] = pixman_int_to_fixed (line) + pixman_fixed_1 / 2;
v.vector[2] = pixman_fixed_1;
if (!pixman_transform_point_3d (image->common.transform, &v))
return;
ux = image->common.transform->matrix[0][0];
uy = image->common.transform->matrix[1][0];
x = v.vector[0];
y = v.vector[1];
for (i = 0; i < width; ++i)
{
int width, height, x0, y0;
const uint8_t *row;
if (mask && !mask[i])
goto next;
width = image->bits.width;
height = image->bits.height;
x0 = pixman_fixed_to_int (x - pixman_fixed_e);
y0 = pixman_fixed_to_int (y - pixman_fixed_e);
if (repeat_mode == PIXMAN_REPEAT_NONE &&
(y0 < 0 || y0 >= height || x0 < 0 || x0 >= width))
{
buffer[i] = 0;
}
else
{
uint32_t mask = PIXMAN_FORMAT_A (format)? 0 : 0xff000000;
if (repeat_mode != PIXMAN_REPEAT_NONE)
{
repeat (repeat_mode, width, &x0);
repeat (repeat_mode, height, &y0);
}
row = (uint8_t *)bits->bits + bits->rowstride * 4 * y0;
buffer[i] = convert_pixel (row, x0) | mask;
}
next:
x += ux;
y += uy;
}
}
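The rounding convention used above — sample at the pixel center, then compute `pixman_fixed_to_int (x - pixman_fixed_e)` — can be shown in isolation. The nudge by `pixman_fixed_e` (the smallest 16.16 step) makes a coordinate that lands exactly on a pixel boundary select the pixel below it. The macros are redefined locally here so the sketch is self-contained; they shadow the real pixman definitions:

```c
#include <stdint.h>

typedef int32_t pixman_fixed_t;            /* 16.16 fixed point            */
#define pixman_fixed_1         (1 << 16)
#define pixman_fixed_e         1           /* smallest representable step  */
#define pixman_fixed_to_int(f) ((int) ((f) >> 16))
#define pixman_int_to_fixed(i) ((pixman_fixed_t) ((i) << 16))

/* Map a destination pixel index to a source pixel index for an identity
 * transform: start at the pixel center, then round as the fetcher does.
 */
static int
nearest_source_index (int dest_x)
{
    pixman_fixed_t x = pixman_int_to_fixed (dest_x) + pixman_fixed_1 / 2;

    return pixman_fixed_to_int (x - pixman_fixed_e);
}
```

For an identity transform this maps every destination pixel to the same-numbered source pixel, which is the sanity condition a nearest fetcher must satisfy.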
static force_inline uint32_t
convert_a8r8g8b8 (const uint8_t *row, int x)
{
@@ -940,29 +1008,49 @@ convert_r5g6b5 (const uint8_t *row, int x)
uint32_t * buffer, \
const uint32_t * mask) \
{ \
-bits_image_fetch_bilinear_affine (image, offset, line, width, buffer, mask, \
+bits_image_fetch_bilinear_affine (image, offset, line, \
+width, buffer, mask, \
 convert_ ## format, \
 PIXMAN_ ## format, \
 repeat_mode); \
-} \
-extern int no_such_variable
+}
-MAKE_BILINEAR_FETCHER (pad_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_PAD);
-MAKE_BILINEAR_FETCHER (none_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_NONE);
-MAKE_BILINEAR_FETCHER (reflect_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_REFLECT);
-MAKE_BILINEAR_FETCHER (normal_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_NORMAL);
-MAKE_BILINEAR_FETCHER (pad_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_PAD);
-MAKE_BILINEAR_FETCHER (none_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_NONE);
-MAKE_BILINEAR_FETCHER (reflect_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_REFLECT);
-MAKE_BILINEAR_FETCHER (normal_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_NORMAL);
-MAKE_BILINEAR_FETCHER (pad_a8, a8, PIXMAN_REPEAT_PAD);
-MAKE_BILINEAR_FETCHER (none_a8, a8, PIXMAN_REPEAT_NONE);
-MAKE_BILINEAR_FETCHER (reflect_a8, a8, PIXMAN_REPEAT_REFLECT);
-MAKE_BILINEAR_FETCHER (normal_a8, a8, PIXMAN_REPEAT_NORMAL);
-MAKE_BILINEAR_FETCHER (pad_r5g6b5, r5g6b5, PIXMAN_REPEAT_PAD);
-MAKE_BILINEAR_FETCHER (none_r5g6b5, r5g6b5, PIXMAN_REPEAT_NONE);
-MAKE_BILINEAR_FETCHER (reflect_r5g6b5, r5g6b5, PIXMAN_REPEAT_REFLECT);
-MAKE_BILINEAR_FETCHER (normal_r5g6b5, r5g6b5, PIXMAN_REPEAT_NORMAL);
+#define MAKE_NEAREST_FETCHER(name, format, repeat_mode) \
+static void \
+bits_image_fetch_nearest_affine_ ## name (pixman_image_t *image, \
+int offset, \
+int line, \
+int width, \
+uint32_t * buffer, \
+const uint32_t * mask) \
+{ \
+bits_image_fetch_nearest_affine (image, offset, line, \
+width, buffer, mask, \
+convert_ ## format, \
+PIXMAN_ ## format, \
+repeat_mode); \
+}
+#define MAKE_FETCHERS(name, format, repeat_mode) \
+MAKE_NEAREST_FETCHER (name, format, repeat_mode) \
+MAKE_BILINEAR_FETCHER (name, format, repeat_mode)
+MAKE_FETCHERS (pad_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_PAD)
+MAKE_FETCHERS (none_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_NONE)
+MAKE_FETCHERS (reflect_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_REFLECT)
+MAKE_FETCHERS (normal_a8r8g8b8, a8r8g8b8, PIXMAN_REPEAT_NORMAL)
+MAKE_FETCHERS (pad_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_PAD)
+MAKE_FETCHERS (none_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_NONE)
+MAKE_FETCHERS (reflect_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_REFLECT)
+MAKE_FETCHERS (normal_x8r8g8b8, x8r8g8b8, PIXMAN_REPEAT_NORMAL)
+MAKE_FETCHERS (pad_a8, a8, PIXMAN_REPEAT_PAD)
+MAKE_FETCHERS (none_a8, a8, PIXMAN_REPEAT_NONE)
+MAKE_FETCHERS (reflect_a8, a8, PIXMAN_REPEAT_REFLECT)
+MAKE_FETCHERS (normal_a8, a8, PIXMAN_REPEAT_NORMAL)
+MAKE_FETCHERS (pad_r5g6b5, r5g6b5, PIXMAN_REPEAT_PAD)
+MAKE_FETCHERS (none_r5g6b5, r5g6b5, PIXMAN_REPEAT_NONE)
+MAKE_FETCHERS (reflect_r5g6b5, r5g6b5, PIXMAN_REPEAT_REFLECT)
+MAKE_FETCHERS (normal_r5g6b5, r5g6b5, PIXMAN_REPEAT_NORMAL)
static void
bits_image_fetch_solid_32 (pixman_image_t * image,
@@ -1176,6 +1264,13 @@ static const fetcher_info_t fetcher_info[] =
FAST_PATH_AFFINE_TRANSFORM | \
FAST_PATH_BILINEAR_FILTER)
#define GENERAL_NEAREST_FLAGS \
(FAST_PATH_NO_ALPHA_MAP | \
FAST_PATH_NO_ACCESSORS | \
FAST_PATH_HAS_TRANSFORM | \
FAST_PATH_AFFINE_TRANSFORM | \
FAST_PATH_NEAREST_FILTER)
#define BILINEAR_AFFINE_FAST_PATH(name, format, repeat) \
{ PIXMAN_ ## format, \
GENERAL_BILINEAR_FLAGS | FAST_PATH_ ## repeat ## _REPEAT, \
@@ -1183,22 +1278,33 @@ static const fetcher_info_t fetcher_info[] =
_pixman_image_get_scanline_generic_64 \
},
-BILINEAR_AFFINE_FAST_PATH (pad_a8r8g8b8, a8r8g8b8, PAD)
-BILINEAR_AFFINE_FAST_PATH (none_a8r8g8b8, a8r8g8b8, NONE)
-BILINEAR_AFFINE_FAST_PATH (reflect_a8r8g8b8, a8r8g8b8, REFLECT)
-BILINEAR_AFFINE_FAST_PATH (normal_a8r8g8b8, a8r8g8b8, NORMAL)
-BILINEAR_AFFINE_FAST_PATH (pad_x8r8g8b8, x8r8g8b8, PAD)
-BILINEAR_AFFINE_FAST_PATH (none_x8r8g8b8, x8r8g8b8, NONE)
-BILINEAR_AFFINE_FAST_PATH (reflect_x8r8g8b8, x8r8g8b8, REFLECT)
-BILINEAR_AFFINE_FAST_PATH (normal_x8r8g8b8, x8r8g8b8, NORMAL)
-BILINEAR_AFFINE_FAST_PATH (pad_a8, a8, PAD)
-BILINEAR_AFFINE_FAST_PATH (none_a8, a8, NONE)
-BILINEAR_AFFINE_FAST_PATH (reflect_a8, a8, REFLECT)
-BILINEAR_AFFINE_FAST_PATH (normal_a8, a8, NORMAL)
-BILINEAR_AFFINE_FAST_PATH (pad_r5g6b5, r5g6b5, PAD)
-BILINEAR_AFFINE_FAST_PATH (none_r5g6b5, r5g6b5, NONE)
-BILINEAR_AFFINE_FAST_PATH (reflect_r5g6b5, r5g6b5, REFLECT)
-BILINEAR_AFFINE_FAST_PATH (normal_r5g6b5, r5g6b5, NORMAL)
+#define NEAREST_AFFINE_FAST_PATH(name, format, repeat) \
+{ PIXMAN_ ## format, \
+GENERAL_NEAREST_FLAGS | FAST_PATH_ ## repeat ## _REPEAT, \
+bits_image_fetch_nearest_affine_ ## name, \
+_pixman_image_get_scanline_generic_64 \
+},
+#define AFFINE_FAST_PATHS(name, format, repeat) \
+BILINEAR_AFFINE_FAST_PATH(name, format, repeat) \
+NEAREST_AFFINE_FAST_PATH(name, format, repeat)
+AFFINE_FAST_PATHS (pad_a8r8g8b8, a8r8g8b8, PAD)
+AFFINE_FAST_PATHS (none_a8r8g8b8, a8r8g8b8, NONE)
+AFFINE_FAST_PATHS (reflect_a8r8g8b8, a8r8g8b8, REFLECT)
+AFFINE_FAST_PATHS (normal_a8r8g8b8, a8r8g8b8, NORMAL)
+AFFINE_FAST_PATHS (pad_x8r8g8b8, x8r8g8b8, PAD)
+AFFINE_FAST_PATHS (none_x8r8g8b8, x8r8g8b8, NONE)
+AFFINE_FAST_PATHS (reflect_x8r8g8b8, x8r8g8b8, REFLECT)
+AFFINE_FAST_PATHS (normal_x8r8g8b8, x8r8g8b8, NORMAL)
+AFFINE_FAST_PATHS (pad_a8, a8, PAD)
+AFFINE_FAST_PATHS (none_a8, a8, NONE)
+AFFINE_FAST_PATHS (reflect_a8, a8, REFLECT)
+AFFINE_FAST_PATHS (normal_a8, a8, NORMAL)
+AFFINE_FAST_PATHS (pad_r5g6b5, r5g6b5, PAD)
+AFFINE_FAST_PATHS (none_r5g6b5, r5g6b5, NONE)
+AFFINE_FAST_PATHS (reflect_r5g6b5, r5g6b5, REFLECT)
+AFFINE_FAST_PATHS (normal_r5g6b5, r5g6b5, NORMAL)
/* Affine, no alpha */
{ PIXMAN_any,
@@ -1228,8 +1334,8 @@ bits_image_property_changed (pixman_image_t *image)
if ((info->format == format || info->format == PIXMAN_any) &&
(info->flags & flags) == info->flags)
{
-image->common.get_scanline_32 = info->fetch_32;
-image->common.get_scanline_64 = info->fetch_64;
+image->bits.get_scanline_32 = info->fetch_32;
+image->bits.get_scanline_64 = info->fetch_64;
break;
}
@@ -1237,6 +1343,170 @@ bits_image_property_changed (pixman_image_t *image)
}
}
static uint32_t *
src_get_scanline_narrow (pixman_iter_t *iter, const uint32_t *mask)
{
iter->image->bits.get_scanline_32 (
iter->image, iter->x, iter->y++, iter->width, iter->buffer, mask);
return iter->buffer;
}
static uint32_t *
src_get_scanline_wide (pixman_iter_t *iter, const uint32_t *mask)
{
iter->image->bits.get_scanline_64 (
iter->image, iter->x, iter->y++, iter->width, iter->buffer, mask);
return iter->buffer;
}
void
_pixman_bits_image_src_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
if (iter->flags & ITER_NARROW)
iter->get_scanline = src_get_scanline_narrow;
else
iter->get_scanline = src_get_scanline_wide;
}
static uint32_t *
dest_get_scanline_narrow (pixman_iter_t *iter, const uint32_t *mask)
{
pixman_image_t *image = iter->image;
int x = iter->x;
int y = iter->y;
int width = iter->width;
uint32_t * buffer = iter->buffer;
image->bits.fetch_scanline_32 (image, x, y, width, buffer, mask);
if (image->common.alpha_map)
{
x -= image->common.alpha_origin_x;
y -= image->common.alpha_origin_y;
image->common.alpha_map->fetch_scanline_32 (
(pixman_image_t *)image->common.alpha_map,
x, y, width, buffer, mask);
}
return iter->buffer;
}
static uint32_t *
dest_get_scanline_wide (pixman_iter_t *iter, const uint32_t *mask)
{
bits_image_t * image = &iter->image->bits;
int x = iter->x;
int y = iter->y;
int width = iter->width;
uint32_t * buffer = iter->buffer;
image->fetch_scanline_64 (
(pixman_image_t *)image, x, y, width, buffer, mask);
if (image->common.alpha_map)
{
x -= image->common.alpha_origin_x;
y -= image->common.alpha_origin_y;
image->common.alpha_map->fetch_scanline_64 (
(pixman_image_t *)image->common.alpha_map, x, y, width, buffer, mask);
}
return iter->buffer;
}
static void
dest_write_back_narrow (pixman_iter_t *iter)
{
bits_image_t * image = &iter->image->bits;
int x = iter->x;
int y = iter->y;
int width = iter->width;
const uint32_t *buffer = iter->buffer;
image->store_scanline_32 (image, x, y, width, buffer);
if (image->common.alpha_map)
{
x -= image->common.alpha_origin_x;
y -= image->common.alpha_origin_y;
image->common.alpha_map->store_scanline_32 (
image->common.alpha_map, x, y, width, buffer);
}
iter->y++;
}
static void
dest_write_back_wide (pixman_iter_t *iter)
{
bits_image_t * image = &iter->image->bits;
int x = iter->x;
int y = iter->y;
int width = iter->width;
const uint32_t *buffer = iter->buffer;
image->store_scanline_64 (image, x, y, width, buffer);
if (image->common.alpha_map)
{
x -= image->common.alpha_origin_x;
y -= image->common.alpha_origin_y;
image->common.alpha_map->store_scanline_64 (
image->common.alpha_map, x, y, width, buffer);
}
iter->y++;
}
static void
dest_write_back_direct (pixman_iter_t *iter)
{
iter->buffer += iter->image->bits.rowstride;
}
void
_pixman_bits_image_dest_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
if (iter->flags & ITER_NARROW)
{
if (((image->common.flags &
(FAST_PATH_NO_ALPHA_MAP | FAST_PATH_NO_ACCESSORS)) ==
(FAST_PATH_NO_ALPHA_MAP | FAST_PATH_NO_ACCESSORS)) &&
(image->bits.format == PIXMAN_a8r8g8b8 ||
(image->bits.format == PIXMAN_x8r8g8b8 &&
(iter->flags & ITER_LOCALIZED_ALPHA))))
{
iter->buffer = image->bits.bits + iter->y * image->bits.rowstride + iter->x;
iter->get_scanline = _pixman_iter_get_scanline_noop;
iter->write_back = dest_write_back_direct;
}
else
{
if ((iter->flags & (ITER_IGNORE_RGB | ITER_IGNORE_ALPHA)) ==
(ITER_IGNORE_RGB | ITER_IGNORE_ALPHA))
{
iter->get_scanline = _pixman_iter_get_scanline_noop;
}
else
{
iter->get_scanline = dest_get_scanline_narrow;
}
iter->write_back = dest_write_back_narrow;
}
}
else
{
iter->get_scanline = dest_get_scanline_wide;
iter->write_back = dest_write_back_wide;
}
}
static uint32_t *
create_bits (pixman_format_code_t format,
int width,


@@ -132,6 +132,17 @@ combine_clear (pixman_implementation_t *imp,
memset (dest, 0, width * sizeof(comp4_t));
}
static void
combine_dst (pixman_implementation_t *imp,
pixman_op_t op,
comp4_t * dest,
const comp4_t * src,
const comp4_t * mask,
int width)
{
return;
}
static void
combine_src_u (pixman_implementation_t *imp,
pixman_op_t op,
@@ -948,15 +959,33 @@ set_lum (comp4_t dest[3], comp4_t src[3], comp4_t sa, comp4_t lum)
if (min < 0)
{
-tmp[0] = l + (tmp[0] - l) * l / (l - min);
-tmp[1] = l + (tmp[1] - l) * l / (l - min);
-tmp[2] = l + (tmp[2] - l) * l / (l - min);
+if (l - min == 0.0)
+{
+tmp[0] = 0;
+tmp[1] = 0;
+tmp[2] = 0;
+}
+else
+{
+tmp[0] = l + (tmp[0] - l) * l / (l - min);
+tmp[1] = l + (tmp[1] - l) * l / (l - min);
+tmp[2] = l + (tmp[2] - l) * l / (l - min);
+}
}
if (max > a)
{
-tmp[0] = l + (tmp[0] - l) * (a - l) / (max - l);
-tmp[1] = l + (tmp[1] - l) * (a - l) / (max - l);
-tmp[2] = l + (tmp[2] - l) * (a - l) / (max - l);
+if (max - l == 0.0)
+{
+tmp[0] = a;
+tmp[1] = a;
+tmp[2] = a;
+}
+else
+{
+tmp[0] = l + (tmp[0] - l) * (a - l) / (max - l);
+tmp[1] = l + (tmp[1] - l) * (a - l) / (max - l);
+tmp[2] = l + (tmp[2] - l) * (a - l) / (max - l);
+}
}
dest[0] = tmp[0] * MASK + 0.5;
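The guards added to `set_lum` avoid a division by zero when the channel span collapses (all channels equal to the luminosity). A scalar illustration of the lower clamp, using a hypothetical helper rather than pixman's actual signature:

```c
/* Pull one channel value c toward luminosity l without dividing by
 * zero when the span (l - min) collapses, mirroring the guard the
 * set_lum fix introduces.
 */
static double
clamp_channel_low (double c, double l, double min)
{
    if (l - min == 0.0)
        return 0.0;   /* degenerate span: all channels already equal l */

    return l + (c - l) * l / (l - min);
}
```

Without the guard, a gray input (where `min == l`) produced a 0/0 result and fed NaN into the later `* MASK + 0.5` conversion.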
@@ -1296,17 +1325,13 @@ combine_disjoint_over_u (pixman_implementation_t *imp,
comp4_t s = combine_mask (src, mask, i);
comp2_t a = s >> A_SHIFT;
-if (a != 0x00)
+if (s != 0x00)
 {
-if (a != MASK)
-{
-comp4_t d = *(dest + i);
-a = combine_disjoint_out_part (d >> A_SHIFT, a);
-UNcx4_MUL_UNc_ADD_UNcx4 (d, a, s);
-s = d;
-}
+comp4_t d = *(dest + i);
+a = combine_disjoint_out_part (d >> A_SHIFT, a);
+UNcx4_MUL_UNc_ADD_UNcx4 (d, a, s);
-*(dest + i) = s;
+*(dest + i) = d;
}
}
}
@@ -2314,7 +2339,7 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
/* Unified alpha */
imp->combine_width[PIXMAN_OP_CLEAR] = combine_clear;
imp->combine_width[PIXMAN_OP_SRC] = combine_src_u;
-/* dest */
+imp->combine_width[PIXMAN_OP_DST] = combine_dst;
imp->combine_width[PIXMAN_OP_OVER] = combine_over_u;
imp->combine_width[PIXMAN_OP_OVER_REVERSE] = combine_over_reverse_u;
imp->combine_width[PIXMAN_OP_IN] = combine_in_u;
@@ -2330,7 +2355,7 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
/* Disjoint, unified */
imp->combine_width[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear;
imp->combine_width[PIXMAN_OP_DISJOINT_SRC] = combine_src_u;
-/* dest */
+imp->combine_width[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_width[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_u;
imp->combine_width[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_u;
imp->combine_width[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_u;
@@ -2344,7 +2369,7 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
/* Conjoint, unified */
imp->combine_width[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear;
imp->combine_width[PIXMAN_OP_CONJOINT_SRC] = combine_src_u;
-/* dest */
+imp->combine_width[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_width[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_u;
imp->combine_width[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_u;
imp->combine_width[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_u;
@@ -2390,7 +2415,7 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
/* Disjoint CA */
imp->combine_width_ca[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear_ca;
imp->combine_width_ca[PIXMAN_OP_DISJOINT_SRC] = combine_src_ca;
-/* dest */
+imp->combine_width_ca[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_width_ca[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_ca;
imp->combine_width_ca[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_ca;
imp->combine_width_ca[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_ca;
@@ -2404,7 +2429,7 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
/* Conjoint CA */
imp->combine_width_ca[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear_ca;
imp->combine_width_ca[PIXMAN_OP_CONJOINT_SRC] = combine_src_ca;
-/* dest */
+imp->combine_width_ca[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_width_ca[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_ca;
imp->combine_width_ca[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_ca;
imp->combine_width_ca[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_ca;
@@ -2427,10 +2452,10 @@ _pixman_setup_combiner_functions_width (pixman_implementation_t *imp)
imp->combine_width_ca[PIXMAN_OP_DIFFERENCE] = combine_difference_ca;
imp->combine_width_ca[PIXMAN_OP_EXCLUSION] = combine_exclusion_ca;
-/* It is not clear that these make sense, so leave them out for now */
-imp->combine_width_ca[PIXMAN_OP_HSL_HUE] = NULL;
-imp->combine_width_ca[PIXMAN_OP_HSL_SATURATION] = NULL;
-imp->combine_width_ca[PIXMAN_OP_HSL_COLOR] = NULL;
-imp->combine_width_ca[PIXMAN_OP_HSL_LUMINOSITY] = NULL;
+/* It is not clear that these make sense, so make them noops for now */
+imp->combine_width_ca[PIXMAN_OP_HSL_HUE] = combine_dst;
+imp->combine_width_ca[PIXMAN_OP_HSL_SATURATION] = combine_dst;
+imp->combine_width_ca[PIXMAN_OP_HSL_COLOR] = combine_dst;
+imp->combine_width_ca[PIXMAN_OP_HSL_LUMINOSITY] = combine_dst;
}


@@ -31,7 +31,7 @@
(((comp2_t) (a) * MASK) / (b))
#define ADD_UNc(x, y, t) \
-((t) = x + y, \
+((t) = (x) + (y), \
(comp4_t) (comp1_t) ((t) | (0 - ((t) >> G_SHIFT))))
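The parenthesization fix leaves the classic branch-free saturating-add trick in `ADD_UNc` intact: after adding two 8-bit channels in a wider register, bit 8 of the sum is smeared into a mask that forces the result to 0xff on overflow. A standalone C version of the same idea (helper name invented here):

```c
#include <stdint.h>

#define G_SHIFT 8   /* channel width in bits, as in pixman-combine32.h */

/* Branch-free saturating add of two 8-bit channel values held in a
 * 32-bit temporary: (t >> 8) is 1 exactly when the sum overflowed,
 * so (0 - (t >> 8)) is all-ones on overflow, and OR-ing it in
 * saturates the low byte to 0xff.
 */
static uint8_t
add_unc (uint8_t x, uint8_t y)
{
    uint32_t t = (uint32_t) x + y;

    return (uint8_t) (t | (0 - (t >> G_SHIFT)));
}
```

The extra parentheses matter because the macro arguments can be expressions like `d >> A_SHIFT`; without them, operator precedence would silently change the computation.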
#define DIV_ONE_UNc(x) \
@@ -84,15 +84,15 @@
#define UNcx4_MUL_UNc(x, a) \
do \
{ \
-comp4_t r1, r2, t; \
+comp4_t r1__, r2__, t__; \
\
-r1 = (x); \
-UNc_rb_MUL_UNc (r1, a, t); \
+r1__ = (x); \
+UNc_rb_MUL_UNc (r1__, (a), t__); \
\
-r2 = (x) >> G_SHIFT; \
-UNc_rb_MUL_UNc (r2, a, t); \
+r2__ = (x) >> G_SHIFT; \
+UNc_rb_MUL_UNc (r2__, (a), t__); \
\
-x = r1 | (r2 << G_SHIFT); \
+(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -101,19 +101,19 @@
#define UNcx4_MUL_UNc_ADD_UNcx4(x, a, y) \
do \
{ \
-comp4_t r1, r2, r3, t; \
+comp4_t r1__, r2__, r3__, t__; \
\
-r1 = (x); \
-r2 = (y) & RB_MASK; \
-UNc_rb_MUL_UNc (r1, a, t); \
-UNc_rb_ADD_UNc_rb (r1, r2, t); \
+r1__ = (x); \
+r2__ = (y) & RB_MASK; \
+UNc_rb_MUL_UNc (r1__, (a), t__); \
+UNc_rb_ADD_UNc_rb (r1__, r2__, t__); \
\
-r2 = (x) >> G_SHIFT; \
-r3 = ((y) >> G_SHIFT) & RB_MASK; \
-UNc_rb_MUL_UNc (r2, a, t); \
-UNc_rb_ADD_UNc_rb (r2, r3, t); \
+r2__ = (x) >> G_SHIFT; \
+r3__ = ((y) >> G_SHIFT) & RB_MASK; \
+UNc_rb_MUL_UNc (r2__, (a), t__); \
+UNc_rb_ADD_UNc_rb (r2__, r3__, t__); \
\
-x = r1 | (r2 << G_SHIFT); \
+(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -122,21 +122,21 @@
#define UNcx4_MUL_UNc_ADD_UNcx4_MUL_UNc(x, a, y, b) \
do \
{ \
comp4_t r1, r2, r3, t; \
comp4_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = y; \
UNc_rb_MUL_UNc (r1, a, t); \
UNc_rb_MUL_UNc (r2, b, t); \
UNc_rb_ADD_UNc_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (y); \
UNc_rb_MUL_UNc (r1__, (a), t__); \
UNc_rb_MUL_UNc (r2__, (b), t__); \
UNc_rb_ADD_UNc_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (y >> G_SHIFT); \
UNc_rb_MUL_UNc (r2, a, t); \
UNc_rb_MUL_UNc (r3, b, t); \
UNc_rb_ADD_UNc_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((y) >> G_SHIFT); \
UNc_rb_MUL_UNc (r2__, (a), t__); \
UNc_rb_MUL_UNc (r3__, (b), t__); \
UNc_rb_ADD_UNc_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -145,17 +145,17 @@
#define UNcx4_MUL_UNcx4(x, a) \
do \
{ \
comp4_t r1, r2, r3, t; \
comp4_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UNc_rb_MUL_UNc_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UNc_rb_MUL_UNc_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UNc_rb_MUL_UNc_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UNc_rb_MUL_UNc_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -164,21 +164,21 @@
#define UNcx4_MUL_UNcx4_ADD_UNcx4(x, a, y) \
do \
{ \
comp4_t r1, r2, r3, t; \
comp4_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UNc_rb_MUL_UNc_rb (r1, r2, t); \
r2 = y & RB_MASK; \
UNc_rb_ADD_UNc_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UNc_rb_MUL_UNc_rb (r1__, r2__, t__); \
r2__ = (y) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (a >> G_SHIFT); \
UNc_rb_MUL_UNc_rb (r2, r3, t); \
r3 = (y >> G_SHIFT) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((a) >> G_SHIFT); \
UNc_rb_MUL_UNc_rb (r2__, r3__, t__); \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -187,40 +187,40 @@
#define UNcx4_MUL_UNcx4_ADD_UNcx4_MUL_UNc(x, a, y, b) \
do \
{ \
comp4_t r1, r2, r3, t; \
comp4_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UNc_rb_MUL_UNc_rb (r1, r2, t); \
r2 = y; \
UNc_rb_MUL_UNc (r2, b, t); \
UNc_rb_ADD_UNc_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UNc_rb_MUL_UNc_rb (r1__, r2__, t__); \
r2__ = (y); \
UNc_rb_MUL_UNc (r2__, (b), t__); \
UNc_rb_ADD_UNc_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UNc_rb_MUL_UNc_rb (r2, r3, t); \
r3 = y >> G_SHIFT; \
UNc_rb_MUL_UNc (r3, b, t); \
UNc_rb_ADD_UNc_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UNc_rb_MUL_UNc_rb (r2__, r3__, t__); \
r3__ = (y) >> G_SHIFT; \
UNc_rb_MUL_UNc (r3__, (b), t__); \
UNc_rb_ADD_UNc_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
x_c = min(x_c + y_c, 255)
*/
x_c = min(x_c + y_c, 255)
*/
#define UNcx4_ADD_UNcx4(x, y) \
do \
{ \
comp4_t r1, r2, r3, t; \
comp4_t r1__, r2__, r3__, t__; \
\
r1 = x & RB_MASK; \
r2 = y & RB_MASK; \
UNc_rb_ADD_UNc_rb (r1, r2, t); \
r1__ = (x) & RB_MASK; \
r2__ = (y) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT) & RB_MASK; \
r3 = (y >> G_SHIFT) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT) & RB_MASK; \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UNc_rb_ADD_UNc_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)


@@ -136,6 +136,17 @@ combine_clear (pixman_implementation_t *imp,
memset (dest, 0, width * sizeof(uint32_t));
}
static void
combine_dst (pixman_implementation_t *imp,
pixman_op_t op,
uint32_t * dest,
const uint32_t * src,
const uint32_t * mask,
int width)
{
return;
}
static void
combine_src_u (pixman_implementation_t *imp,
pixman_op_t op,
@@ -1300,17 +1311,13 @@ combine_disjoint_over_u (pixman_implementation_t *imp,
uint32_t s = combine_mask (src, mask, i);
uint16_t a = s >> A_SHIFT;
if (a != 0x00)
if (s != 0x00)
{
if (a != MASK)
{
uint32_t d = *(dest + i);
a = combine_disjoint_out_part (d >> A_SHIFT, a);
UN8x4_MUL_UN8_ADD_UN8x4 (d, a, s);
s = d;
}
uint32_t d = *(dest + i);
a = combine_disjoint_out_part (d >> A_SHIFT, a);
UN8x4_MUL_UN8_ADD_UN8x4 (d, a, s);
*(dest + i) = s;
*(dest + i) = d;
}
}
}
@@ -2318,7 +2325,7 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
/* Unified alpha */
imp->combine_32[PIXMAN_OP_CLEAR] = combine_clear;
imp->combine_32[PIXMAN_OP_SRC] = combine_src_u;
/* dest */
imp->combine_32[PIXMAN_OP_DST] = combine_dst;
imp->combine_32[PIXMAN_OP_OVER] = combine_over_u;
imp->combine_32[PIXMAN_OP_OVER_REVERSE] = combine_over_reverse_u;
imp->combine_32[PIXMAN_OP_IN] = combine_in_u;
@@ -2334,7 +2341,7 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
/* Disjoint, unified */
imp->combine_32[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear;
imp->combine_32[PIXMAN_OP_DISJOINT_SRC] = combine_src_u;
/* dest */
imp->combine_32[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_32[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_u;
imp->combine_32[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_u;
imp->combine_32[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_u;
@@ -2348,7 +2355,7 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
/* Conjoint, unified */
imp->combine_32[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear;
imp->combine_32[PIXMAN_OP_CONJOINT_SRC] = combine_src_u;
/* dest */
imp->combine_32[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_32[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_u;
imp->combine_32[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_u;
imp->combine_32[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_u;
@@ -2394,7 +2401,7 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
/* Disjoint CA */
imp->combine_32_ca[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear_ca;
imp->combine_32_ca[PIXMAN_OP_DISJOINT_SRC] = combine_src_ca;
/* dest */
imp->combine_32_ca[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_32_ca[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_ca;
imp->combine_32_ca[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_ca;
imp->combine_32_ca[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_ca;
@@ -2408,7 +2415,7 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
/* Conjoint CA */
imp->combine_32_ca[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear_ca;
imp->combine_32_ca[PIXMAN_OP_CONJOINT_SRC] = combine_src_ca;
/* dest */
imp->combine_32_ca[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_32_ca[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_ca;
imp->combine_32_ca[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_ca;
imp->combine_32_ca[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_ca;
@@ -2431,10 +2438,10 @@ _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp)
imp->combine_32_ca[PIXMAN_OP_DIFFERENCE] = combine_difference_ca;
imp->combine_32_ca[PIXMAN_OP_EXCLUSION] = combine_exclusion_ca;
/* It is not clear that these make sense, so leave them out for now */
imp->combine_32_ca[PIXMAN_OP_HSL_HUE] = NULL;
imp->combine_32_ca[PIXMAN_OP_HSL_SATURATION] = NULL;
imp->combine_32_ca[PIXMAN_OP_HSL_COLOR] = NULL;
imp->combine_32_ca[PIXMAN_OP_HSL_LUMINOSITY] = NULL;
/* It is not clear that these make sense, so make them noops for now */
imp->combine_32_ca[PIXMAN_OP_HSL_HUE] = combine_dst;
imp->combine_32_ca[PIXMAN_OP_HSL_SATURATION] = combine_dst;
imp->combine_32_ca[PIXMAN_OP_HSL_COLOR] = combine_dst;
imp->combine_32_ca[PIXMAN_OP_HSL_LUMINOSITY] = combine_dst;
}


@@ -35,7 +35,7 @@
(((uint16_t) (a) * MASK) / (b))
#define ADD_UN8(x, y, t) \
((t) = x + y, \
((t) = (x) + (y), \
(uint32_t) (uint8_t) ((t) | (0 - ((t) >> G_SHIFT))))
#define DIV_ONE_UN8(x) \
@@ -88,15 +88,15 @@
#define UN8x4_MUL_UN8(x, a) \
do \
{ \
uint32_t r1, r2, t; \
uint32_t r1__, r2__, t__; \
\
r1 = (x); \
UN8_rb_MUL_UN8 (r1, a, t); \
r1__ = (x); \
UN8_rb_MUL_UN8 (r1__, (a), t__); \
\
r2 = (x) >> G_SHIFT; \
UN8_rb_MUL_UN8 (r2, a, t); \
r2__ = (x) >> G_SHIFT; \
UN8_rb_MUL_UN8 (r2__, (a), t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -105,19 +105,19 @@
#define UN8x4_MUL_UN8_ADD_UN8x4(x, a, y) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = (x); \
r2 = (y) & RB_MASK; \
UN8_rb_MUL_UN8 (r1, a, t); \
UN8_rb_ADD_UN8_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (y) & RB_MASK; \
UN8_rb_MUL_UN8 (r1__, (a), t__); \
UN8_rb_ADD_UN8_rb (r1__, r2__, t__); \
\
r2 = (x) >> G_SHIFT; \
r3 = ((y) >> G_SHIFT) & RB_MASK; \
UN8_rb_MUL_UN8 (r2, a, t); \
UN8_rb_ADD_UN8_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN8_rb_MUL_UN8 (r2__, (a), t__); \
UN8_rb_ADD_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -126,21 +126,21 @@
#define UN8x4_MUL_UN8_ADD_UN8x4_MUL_UN8(x, a, y, b) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = y; \
UN8_rb_MUL_UN8 (r1, a, t); \
UN8_rb_MUL_UN8 (r2, b, t); \
UN8_rb_ADD_UN8_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (y); \
UN8_rb_MUL_UN8 (r1__, (a), t__); \
UN8_rb_MUL_UN8 (r2__, (b), t__); \
UN8_rb_ADD_UN8_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (y >> G_SHIFT); \
UN8_rb_MUL_UN8 (r2, a, t); \
UN8_rb_MUL_UN8 (r3, b, t); \
UN8_rb_ADD_UN8_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((y) >> G_SHIFT); \
UN8_rb_MUL_UN8 (r2__, (a), t__); \
UN8_rb_MUL_UN8 (r3__, (b), t__); \
UN8_rb_ADD_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -149,17 +149,17 @@
#define UN8x4_MUL_UN8x4(x, a) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN8_rb_MUL_UN8_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN8_rb_MUL_UN8_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UN8_rb_MUL_UN8_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UN8_rb_MUL_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -168,21 +168,21 @@
#define UN8x4_MUL_UN8x4_ADD_UN8x4(x, a, y) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN8_rb_MUL_UN8_rb (r1, r2, t); \
r2 = y & RB_MASK; \
UN8_rb_ADD_UN8_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN8_rb_MUL_UN8_rb (r1__, r2__, t__); \
r2__ = (y) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (a >> G_SHIFT); \
UN8_rb_MUL_UN8_rb (r2, r3, t); \
r3 = (y >> G_SHIFT) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((a) >> G_SHIFT); \
UN8_rb_MUL_UN8_rb (r2__, r3__, t__); \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -191,40 +191,40 @@
#define UN8x4_MUL_UN8x4_ADD_UN8x4_MUL_UN8(x, a, y, b) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN8_rb_MUL_UN8_rb (r1, r2, t); \
r2 = y; \
UN8_rb_MUL_UN8 (r2, b, t); \
UN8_rb_ADD_UN8_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN8_rb_MUL_UN8_rb (r1__, r2__, t__); \
r2__ = (y); \
UN8_rb_MUL_UN8 (r2__, (b), t__); \
UN8_rb_ADD_UN8_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UN8_rb_MUL_UN8_rb (r2, r3, t); \
r3 = y >> G_SHIFT; \
UN8_rb_MUL_UN8 (r3, b, t); \
UN8_rb_ADD_UN8_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UN8_rb_MUL_UN8_rb (r2__, r3__, t__); \
r3__ = (y) >> G_SHIFT; \
UN8_rb_MUL_UN8 (r3__, (b), t__); \
UN8_rb_ADD_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
x_c = min(x_c + y_c, 255)
*/
x_c = min(x_c + y_c, 255)
*/
#define UN8x4_ADD_UN8x4(x, y) \
do \
{ \
uint32_t r1, r2, r3, t; \
uint32_t r1__, r2__, r3__, t__; \
\
r1 = x & RB_MASK; \
r2 = y & RB_MASK; \
UN8_rb_ADD_UN8_rb (r1, r2, t); \
r1__ = (x) & RB_MASK; \
r2__ = (y) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT) & RB_MASK; \
r3 = (y >> G_SHIFT) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT) & RB_MASK; \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN8_rb_ADD_UN8_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)


@@ -136,6 +136,17 @@ combine_clear (pixman_implementation_t *imp,
memset (dest, 0, width * sizeof(uint64_t));
}
static void
combine_dst (pixman_implementation_t *imp,
pixman_op_t op,
uint64_t * dest,
const uint64_t * src,
const uint64_t * mask,
int width)
{
return;
}
static void
combine_src_u (pixman_implementation_t *imp,
pixman_op_t op,
@@ -1300,17 +1311,13 @@ combine_disjoint_over_u (pixman_implementation_t *imp,
uint64_t s = combine_mask (src, mask, i);
uint32_t a = s >> A_SHIFT;
if (a != 0x00)
if (s != 0x00)
{
if (a != MASK)
{
uint64_t d = *(dest + i);
a = combine_disjoint_out_part (d >> A_SHIFT, a);
UN16x4_MUL_UN16_ADD_UN16x4 (d, a, s);
s = d;
}
uint64_t d = *(dest + i);
a = combine_disjoint_out_part (d >> A_SHIFT, a);
UN16x4_MUL_UN16_ADD_UN16x4 (d, a, s);
*(dest + i) = s;
*(dest + i) = d;
}
}
}
@@ -2318,7 +2325,7 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
/* Unified alpha */
imp->combine_64[PIXMAN_OP_CLEAR] = combine_clear;
imp->combine_64[PIXMAN_OP_SRC] = combine_src_u;
/* dest */
imp->combine_64[PIXMAN_OP_DST] = combine_dst;
imp->combine_64[PIXMAN_OP_OVER] = combine_over_u;
imp->combine_64[PIXMAN_OP_OVER_REVERSE] = combine_over_reverse_u;
imp->combine_64[PIXMAN_OP_IN] = combine_in_u;
@@ -2334,7 +2341,7 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
/* Disjoint, unified */
imp->combine_64[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear;
imp->combine_64[PIXMAN_OP_DISJOINT_SRC] = combine_src_u;
/* dest */
imp->combine_64[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_64[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_u;
imp->combine_64[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_u;
imp->combine_64[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_u;
@@ -2348,7 +2355,7 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
/* Conjoint, unified */
imp->combine_64[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear;
imp->combine_64[PIXMAN_OP_CONJOINT_SRC] = combine_src_u;
/* dest */
imp->combine_64[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_64[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_u;
imp->combine_64[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_u;
imp->combine_64[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_u;
@@ -2394,7 +2401,7 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
/* Disjoint CA */
imp->combine_64_ca[PIXMAN_OP_DISJOINT_CLEAR] = combine_clear_ca;
imp->combine_64_ca[PIXMAN_OP_DISJOINT_SRC] = combine_src_ca;
/* dest */
imp->combine_64_ca[PIXMAN_OP_DISJOINT_DST] = combine_dst;
imp->combine_64_ca[PIXMAN_OP_DISJOINT_OVER] = combine_disjoint_over_ca;
imp->combine_64_ca[PIXMAN_OP_DISJOINT_OVER_REVERSE] = combine_saturate_ca;
imp->combine_64_ca[PIXMAN_OP_DISJOINT_IN] = combine_disjoint_in_ca;
@@ -2408,7 +2415,7 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
/* Conjoint CA */
imp->combine_64_ca[PIXMAN_OP_CONJOINT_CLEAR] = combine_clear_ca;
imp->combine_64_ca[PIXMAN_OP_CONJOINT_SRC] = combine_src_ca;
/* dest */
imp->combine_64_ca[PIXMAN_OP_CONJOINT_DST] = combine_dst;
imp->combine_64_ca[PIXMAN_OP_CONJOINT_OVER] = combine_conjoint_over_ca;
imp->combine_64_ca[PIXMAN_OP_CONJOINT_OVER_REVERSE] = combine_conjoint_over_reverse_ca;
imp->combine_64_ca[PIXMAN_OP_CONJOINT_IN] = combine_conjoint_in_ca;
@@ -2431,10 +2438,10 @@ _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp)
imp->combine_64_ca[PIXMAN_OP_DIFFERENCE] = combine_difference_ca;
imp->combine_64_ca[PIXMAN_OP_EXCLUSION] = combine_exclusion_ca;
/* It is not clear that these make sense, so leave them out for now */
imp->combine_64_ca[PIXMAN_OP_HSL_HUE] = NULL;
imp->combine_64_ca[PIXMAN_OP_HSL_SATURATION] = NULL;
imp->combine_64_ca[PIXMAN_OP_HSL_COLOR] = NULL;
imp->combine_64_ca[PIXMAN_OP_HSL_LUMINOSITY] = NULL;
/* It is not clear that these make sense, so make them noops for now */
imp->combine_64_ca[PIXMAN_OP_HSL_HUE] = combine_dst;
imp->combine_64_ca[PIXMAN_OP_HSL_SATURATION] = combine_dst;
imp->combine_64_ca[PIXMAN_OP_HSL_COLOR] = combine_dst;
imp->combine_64_ca[PIXMAN_OP_HSL_LUMINOSITY] = combine_dst;
}


@@ -35,7 +35,7 @@
(((uint32_t) (a) * MASK) / (b))
#define ADD_UN16(x, y, t) \
((t) = x + y, \
((t) = (x) + (y), \
(uint64_t) (uint16_t) ((t) | (0 - ((t) >> G_SHIFT))))
#define DIV_ONE_UN16(x) \
@@ -88,15 +88,15 @@
#define UN16x4_MUL_UN16(x, a) \
do \
{ \
uint64_t r1, r2, t; \
uint64_t r1__, r2__, t__; \
\
r1 = (x); \
UN16_rb_MUL_UN16 (r1, a, t); \
r1__ = (x); \
UN16_rb_MUL_UN16 (r1__, (a), t__); \
\
r2 = (x) >> G_SHIFT; \
UN16_rb_MUL_UN16 (r2, a, t); \
r2__ = (x) >> G_SHIFT; \
UN16_rb_MUL_UN16 (r2__, (a), t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -105,19 +105,19 @@
#define UN16x4_MUL_UN16_ADD_UN16x4(x, a, y) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = (x); \
r2 = (y) & RB_MASK; \
UN16_rb_MUL_UN16 (r1, a, t); \
UN16_rb_ADD_UN16_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (y) & RB_MASK; \
UN16_rb_MUL_UN16 (r1__, (a), t__); \
UN16_rb_ADD_UN16_rb (r1__, r2__, t__); \
\
r2 = (x) >> G_SHIFT; \
r3 = ((y) >> G_SHIFT) & RB_MASK; \
UN16_rb_MUL_UN16 (r2, a, t); \
UN16_rb_ADD_UN16_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN16_rb_MUL_UN16 (r2__, (a), t__); \
UN16_rb_ADD_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -126,21 +126,21 @@
#define UN16x4_MUL_UN16_ADD_UN16x4_MUL_UN16(x, a, y, b) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = y; \
UN16_rb_MUL_UN16 (r1, a, t); \
UN16_rb_MUL_UN16 (r2, b, t); \
UN16_rb_ADD_UN16_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (y); \
UN16_rb_MUL_UN16 (r1__, (a), t__); \
UN16_rb_MUL_UN16 (r2__, (b), t__); \
UN16_rb_ADD_UN16_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (y >> G_SHIFT); \
UN16_rb_MUL_UN16 (r2, a, t); \
UN16_rb_MUL_UN16 (r3, b, t); \
UN16_rb_ADD_UN16_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((y) >> G_SHIFT); \
UN16_rb_MUL_UN16 (r2__, (a), t__); \
UN16_rb_MUL_UN16 (r3__, (b), t__); \
UN16_rb_ADD_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -149,17 +149,17 @@
#define UN16x4_MUL_UN16x4(x, a) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN16_rb_MUL_UN16_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN16_rb_MUL_UN16_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UN16_rb_MUL_UN16_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UN16_rb_MUL_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -168,21 +168,21 @@
#define UN16x4_MUL_UN16x4_ADD_UN16x4(x, a, y) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN16_rb_MUL_UN16_rb (r1, r2, t); \
r2 = y & RB_MASK; \
UN16_rb_ADD_UN16_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN16_rb_MUL_UN16_rb (r1__, r2__, t__); \
r2__ = (y) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT); \
r3 = (a >> G_SHIFT); \
UN16_rb_MUL_UN16_rb (r2, r3, t); \
r3 = (y >> G_SHIFT) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT); \
r3__ = ((a) >> G_SHIFT); \
UN16_rb_MUL_UN16_rb (r2__, r3__, t__); \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
(x) = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
@@ -191,40 +191,40 @@
#define UN16x4_MUL_UN16x4_ADD_UN16x4_MUL_UN16(x, a, y, b) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = x; \
r2 = a; \
UN16_rb_MUL_UN16_rb (r1, r2, t); \
r2 = y; \
UN16_rb_MUL_UN16 (r2, b, t); \
UN16_rb_ADD_UN16_rb (r1, r2, t); \
r1__ = (x); \
r2__ = (a); \
UN16_rb_MUL_UN16_rb (r1__, r2__, t__); \
r2__ = (y); \
UN16_rb_MUL_UN16 (r2__, (b), t__); \
UN16_rb_ADD_UN16_rb (r1__, r2__, t__); \
\
r2 = x >> G_SHIFT; \
r3 = a >> G_SHIFT; \
UN16_rb_MUL_UN16_rb (r2, r3, t); \
r3 = y >> G_SHIFT; \
UN16_rb_MUL_UN16 (r3, b, t); \
UN16_rb_ADD_UN16_rb (r2, r3, t); \
r2__ = (x) >> G_SHIFT; \
r3__ = (a) >> G_SHIFT; \
UN16_rb_MUL_UN16_rb (r2__, r3__, t__); \
r3__ = (y) >> G_SHIFT; \
UN16_rb_MUL_UN16 (r3__, (b), t__); \
UN16_rb_ADD_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)
/*
x_c = min(x_c + y_c, 255)
*/
x_c = min(x_c + y_c, 255)
*/
#define UN16x4_ADD_UN16x4(x, y) \
do \
{ \
uint64_t r1, r2, r3, t; \
uint64_t r1__, r2__, r3__, t__; \
\
r1 = x & RB_MASK; \
r2 = y & RB_MASK; \
UN16_rb_ADD_UN16_rb (r1, r2, t); \
r1__ = (x) & RB_MASK; \
r2__ = (y) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r1__, r2__, t__); \
\
r2 = (x >> G_SHIFT) & RB_MASK; \
r3 = (y >> G_SHIFT) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r2, r3, t); \
r2__ = ((x) >> G_SHIFT) & RB_MASK; \
r3__ = ((y) >> G_SHIFT) & RB_MASK; \
UN16_rb_ADD_UN16_rb (r2__, r3__, t__); \
\
x = r1 | (r2 << G_SHIFT); \
x = r1__ | (r2__ << G_SHIFT); \
} while (0)


@@ -198,8 +198,7 @@
value = tls_ ## name ## _alloc (); \
} \
return value; \
} \
extern int no_such_variable
}
# define PIXMAN_GET_THREAD_LOCAL(name) \
tls_ ## name ## _get ()


@@ -50,16 +50,16 @@ coordinates_to_parameter (double x, double y, double angle)
*/
}
static void
conical_gradient_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
static uint32_t *
conical_get_scanline_narrow (pixman_iter_t *iter, const uint32_t *mask)
{
source_image_t *source = (source_image_t *)image;
gradient_t *gradient = (gradient_t *)source;
pixman_image_t *image = iter->image;
int x = iter->x;
int y = iter->y;
int width = iter->width;
uint32_t *buffer = iter->buffer;
gradient_t *gradient = (gradient_t *)image;
conical_gradient_t *conical = (conical_gradient_t *)image;
uint32_t *end = buffer + width;
pixman_gradient_walker_t walker;
@@ -71,9 +71,9 @@ conical_gradient_get_scanline_32 (pixman_image_t *image,
double ry = y + 0.5;
double rz = 1.;
_pixman_gradient_walker_init (&walker, gradient, source->common.repeat);
_pixman_gradient_walker_init (&walker, gradient, image->common.repeat);
if (source->common.transform)
if (image->common.transform)
{
pixman_vector_t v;
@@ -82,19 +82,19 @@ conical_gradient_get_scanline_32 (pixman_image_t *image,
v.vector[1] = pixman_int_to_fixed (y) + pixman_fixed_1 / 2;
v.vector[2] = pixman_fixed_1;
if (!pixman_transform_point_3d (source->common.transform, &v))
return;
if (!pixman_transform_point_3d (image->common.transform, &v))
return iter->buffer;
cx = source->common.transform->matrix[0][0] / 65536.;
cy = source->common.transform->matrix[1][0] / 65536.;
cz = source->common.transform->matrix[2][0] / 65536.;
cx = image->common.transform->matrix[0][0] / 65536.;
cy = image->common.transform->matrix[1][0] / 65536.;
cz = image->common.transform->matrix[2][0] / 65536.;
rx = v.vector[0] / 65536.;
ry = v.vector[1] / 65536.;
rz = v.vector[2] / 65536.;
affine =
source->common.transform->matrix[2][0] == 0 &&
image->common.transform->matrix[2][0] == 0 &&
v.vector[2] == pixman_fixed_1;
}
@@ -155,13 +155,28 @@ conical_gradient_get_scanline_32 (pixman_image_t *image,
rz += cz;
}
}
iter->y++;
return iter->buffer;
}
static void
conical_gradient_property_changed (pixman_image_t *image)
static uint32_t *
conical_get_scanline_wide (pixman_iter_t *iter, const uint32_t *mask)
{
image->common.get_scanline_32 = conical_gradient_get_scanline_32;
image->common.get_scanline_64 = _pixman_image_get_scanline_generic_64;
uint32_t *buffer = conical_get_scanline_narrow (iter, NULL);
pixman_expand ((uint64_t *)buffer, buffer, PIXMAN_a8r8g8b8, iter->width);
return buffer;
}
void
_pixman_conical_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
if (iter->flags & ITER_NARROW)
iter->get_scanline = conical_get_scanline_narrow;
else
iter->get_scanline = conical_get_scanline_wide;
}
PIXMAN_EXPORT pixman_image_t *
@@ -191,8 +206,6 @@ pixman_image_create_conical_gradient (pixman_point_fixed_t * center,
conical->center = *center;
conical->angle = (pixman_fixed_to_double (angle) / 180.0) * M_PI;
image->common.property_changed = conical_gradient_property_changed;
return image;
}


@@ -244,7 +244,7 @@ pixman_have_arm_neon (void)
#endif /* USE_ARM_NEON */
#else /* linux ELF */
#elif defined (__linux__) /* linux ELF */
#include <stdlib.h>
#include <unistd.h>
@@ -270,7 +270,7 @@ static pixman_bool_t arm_tests_initialized = FALSE;
*/
static void
pixman_arm_read_auxv()
pixman_arm_detect_cpu_features (void)
{
char buf[1024];
char* pos;
@@ -281,7 +281,7 @@ pixman_arm_read_auxv()
return;
}
fread(buf, sizeof(char), 1024, f);
fread(buf, sizeof(char), sizeof(buf), f);
fclose(f);
pos = strstr(buf, ver_token);
if (pos) {
@@ -301,7 +301,7 @@
#else
static void
pixman_arm_read_auxv ()
pixman_arm_detect_cpu_features (void)
{
int fd;
Elf32_auxv_t aux;
@@ -348,7 +348,7 @@ pixman_bool_t
pixman_have_arm_simd (void)
{
if (!arm_tests_initialized)
pixman_arm_read_auxv ();
pixman_arm_detect_cpu_features ();
return arm_has_v6;
}
@@ -360,14 +360,19 @@ pixman_bool_t
pixman_have_arm_neon (void)
{
if (!arm_tests_initialized)
pixman_arm_read_auxv ();
pixman_arm_detect_cpu_features ();
return arm_has_neon;
}
#endif /* USE_ARM_NEON */
#endif /* linux */
#else /* linux ELF */
#define pixman_have_arm_simd() FALSE
#define pixman_have_arm_neon() FALSE
#endif
#endif /* USE_ARM_SIMD || USE_ARM_NEON */
@@ -610,28 +615,36 @@ pixman_have_sse2 (void)
pixman_implementation_t *
_pixman_choose_implementation (void)
{
#ifdef USE_SSE2
if (pixman_have_sse2 ())
return _pixman_implementation_create_sse2 ();
#endif
pixman_implementation_t *imp;
imp = _pixman_implementation_create_general();
imp = _pixman_implementation_create_fast_path (imp);
#ifdef USE_MMX
if (pixman_have_mmx ())
return _pixman_implementation_create_mmx ();
imp = _pixman_implementation_create_mmx (imp);
#endif
#ifdef USE_SSE2
if (pixman_have_sse2 ())
imp = _pixman_implementation_create_sse2 (imp);
#endif
#ifdef USE_ARM_SIMD
if (pixman_have_arm_simd ())
imp = _pixman_implementation_create_arm_simd (imp);
#endif
#ifdef USE_ARM_NEON
if (pixman_have_arm_neon ())
return _pixman_implementation_create_arm_neon ();
#endif
#ifdef USE_ARM_SIMD
if (pixman_have_arm_simd ())
return _pixman_implementation_create_arm_simd ();
imp = _pixman_implementation_create_arm_neon (imp);
#endif
#ifdef USE_VMX
if (pixman_have_vmx ())
return _pixman_implementation_create_vmx ();
imp = _pixman_implementation_create_vmx (imp);
#endif
return _pixman_implementation_create_fast_path ();
return imp;
}


@@ -188,7 +188,7 @@ fast_composite_in_n_8_8 (pixman_implementation_t *imp,
int32_t w;
uint16_t t;
src = _pixman_image_get_solid (src_image, dest_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dest_image->bits.format);
srca = src >> 24;
@@ -312,7 +312,7 @@ fast_composite_over_n_8_8888 (pixman_implementation_t *imp,
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@@ -364,15 +364,14 @@ fast_composite_add_n_8888_8888_ca (pixman_implementation_t *imp,
int32_t width,
int32_t height)
{
uint32_t src, srca, s;
uint32_t src, s;
uint32_t *dst_line, *dst, d;
uint32_t *mask_line, *mask, ma;
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
return;
@@ -427,7 +426,7 @@ fast_composite_over_n_8888_8888_ca (pixman_implementation_t *imp,
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@@ -494,7 +493,7 @@ fast_composite_over_n_8_0888 (pixman_implementation_t *imp,
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@@ -559,7 +558,7 @@ fast_composite_over_n_8_0565 (pixman_implementation_t *imp,
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@@ -626,7 +625,7 @@ fast_composite_over_n_8888_0565_ca (pixman_implementation_t *imp,
int dst_stride, mask_stride;
int32_t w;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@@ -1034,7 +1033,7 @@ fast_composite_add_n_8_8 (pixman_implementation_t *imp,
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, uint8_t, dst_stride, dst_line, 1);
PIXMAN_IMAGE_GET_LINE (mask_image, mask_x, mask_y, uint8_t, mask_stride, mask_line, 1);
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
sa = (src >> 24);
while (height--)
@@ -1146,7 +1145,7 @@ fast_composite_over_n_1_8888 (pixman_implementation_t *imp,
if (width <= 0)
return;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
return;
@@ -1240,7 +1239,7 @@ fast_composite_over_n_1_0565 (pixman_implementation_t *imp,
if (width <= 0)
return;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
return;
@@ -1332,9 +1331,13 @@ fast_composite_solid_fill (pixman_implementation_t *imp,
{
uint32_t src;
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
if (dst_image->bits.format == PIXMAN_a8)
if (dst_image->bits.format == PIXMAN_a1)
{
src = src >> 31;
}
else if (dst_image->bits.format == PIXMAN_a8)
{
src = src >> 24;
}
@@ -1387,32 +1390,33 @@ fast_composite_src_memcpy (pixman_implementation_t *imp,
}
}
-FAST_NEAREST (8888_8888_cover, 8888, 8888, uint32_t, uint32_t, SRC, COVER);
-FAST_NEAREST (8888_8888_none, 8888, 8888, uint32_t, uint32_t, SRC, NONE);
-FAST_NEAREST (8888_8888_pad, 8888, 8888, uint32_t, uint32_t, SRC, PAD);
-FAST_NEAREST (8888_8888_normal, 8888, 8888, uint32_t, uint32_t, SRC, NORMAL);
-FAST_NEAREST (8888_8888_cover, 8888, 8888, uint32_t, uint32_t, OVER, COVER);
-FAST_NEAREST (8888_8888_none, 8888, 8888, uint32_t, uint32_t, OVER, NONE);
-FAST_NEAREST (8888_8888_pad, 8888, 8888, uint32_t, uint32_t, OVER, PAD);
-FAST_NEAREST (8888_8888_normal, 8888, 8888, uint32_t, uint32_t, OVER, NORMAL);
-FAST_NEAREST (8888_565_cover, 8888, 0565, uint32_t, uint16_t, SRC, COVER);
-FAST_NEAREST (8888_565_none, 8888, 0565, uint32_t, uint16_t, SRC, NONE);
-FAST_NEAREST (8888_565_pad, 8888, 0565, uint32_t, uint16_t, SRC, PAD);
-FAST_NEAREST (8888_565_normal, 8888, 0565, uint32_t, uint16_t, SRC, NORMAL);
-FAST_NEAREST (565_565_normal, 0565, 0565, uint16_t, uint16_t, SRC, NORMAL);
-FAST_NEAREST (8888_565_cover, 8888, 0565, uint32_t, uint16_t, OVER, COVER);
-FAST_NEAREST (8888_565_none, 8888, 0565, uint32_t, uint16_t, OVER, NONE);
-FAST_NEAREST (8888_565_pad, 8888, 0565, uint32_t, uint16_t, OVER, PAD);
-FAST_NEAREST (8888_565_normal, 8888, 0565, uint32_t, uint16_t, OVER, NORMAL);
+FAST_NEAREST (8888_8888_cover, 8888, 8888, uint32_t, uint32_t, SRC, COVER)
+FAST_NEAREST (8888_8888_none, 8888, 8888, uint32_t, uint32_t, SRC, NONE)
+FAST_NEAREST (8888_8888_pad, 8888, 8888, uint32_t, uint32_t, SRC, PAD)
+FAST_NEAREST (8888_8888_normal, 8888, 8888, uint32_t, uint32_t, SRC, NORMAL)
+FAST_NEAREST (8888_8888_cover, 8888, 8888, uint32_t, uint32_t, OVER, COVER)
+FAST_NEAREST (8888_8888_none, 8888, 8888, uint32_t, uint32_t, OVER, NONE)
+FAST_NEAREST (8888_8888_pad, 8888, 8888, uint32_t, uint32_t, OVER, PAD)
+FAST_NEAREST (8888_8888_normal, 8888, 8888, uint32_t, uint32_t, OVER, NORMAL)
+FAST_NEAREST (8888_565_cover, 8888, 0565, uint32_t, uint16_t, SRC, COVER)
+FAST_NEAREST (8888_565_none, 8888, 0565, uint32_t, uint16_t, SRC, NONE)
+FAST_NEAREST (8888_565_pad, 8888, 0565, uint32_t, uint16_t, SRC, PAD)
+FAST_NEAREST (8888_565_normal, 8888, 0565, uint32_t, uint16_t, SRC, NORMAL)
+FAST_NEAREST (565_565_normal, 0565, 0565, uint16_t, uint16_t, SRC, NORMAL)
+FAST_NEAREST (8888_565_cover, 8888, 0565, uint32_t, uint16_t, OVER, COVER)
+FAST_NEAREST (8888_565_none, 8888, 0565, uint32_t, uint16_t, OVER, NONE)
+FAST_NEAREST (8888_565_pad, 8888, 0565, uint32_t, uint16_t, OVER, PAD)
+FAST_NEAREST (8888_565_normal, 8888, 0565, uint32_t, uint16_t, OVER, NORMAL)
/* Use more unrolling for src_0565_0565 because it is typically CPU bound */
static force_inline void
-scaled_nearest_scanline_565_565_SRC (uint16_t * dst,
-uint16_t * src,
-int32_t w,
-pixman_fixed_t vx,
-pixman_fixed_t unit_x,
-pixman_fixed_t max_vx)
+scaled_nearest_scanline_565_565_SRC (uint16_t * dst,
+const uint16_t * src,
+int32_t w,
+pixman_fixed_t vx,
+pixman_fixed_t unit_x,
+pixman_fixed_t max_vx,
+pixman_bool_t fully_transparent_src)
{
uint16_t tmp1, tmp2, tmp3, tmp4;
while ((w -= 4) >= 0)
@@ -1445,13 +1449,13 @@ scaled_nearest_scanline_565_565_SRC (uint16_t * dst,
FAST_NEAREST_MAINLOOP (565_565_cover_SRC,
scaled_nearest_scanline_565_565_SRC,
-uint16_t, uint16_t, COVER);
+uint16_t, uint16_t, COVER)
FAST_NEAREST_MAINLOOP (565_565_none_SRC,
scaled_nearest_scanline_565_565_SRC,
-uint16_t, uint16_t, NONE);
+uint16_t, uint16_t, NONE)
FAST_NEAREST_MAINLOOP (565_565_pad_SRC,
scaled_nearest_scanline_565_565_SRC,
-uint16_t, uint16_t, PAD);
+uint16_t, uint16_t, PAD)
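The nearest-filter scanline templates above walk the source row in 16.16 fixed point: `vx >> 16` is the pixel to fetch, and `vx += unit_x` advances per destination pixel. A standalone model of that stepping (hypothetical helper name, not pixman's code):

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t pixman_fixed_t;   /* 16.16 fixed point, as in pixman */

/* Sketch of the nearest-neighbour stepping used by the scanline
 * templates: advance a 16.16 source coordinate by unit_x per
 * destination pixel and fetch src at its integer part. */
static void
scale_row_nearest (uint16_t *dst, const uint16_t *src, int w,
                   pixman_fixed_t vx, pixman_fixed_t unit_x)
{
    while (w-- > 0)
    {
        *dst++ = src[vx >> 16];
        vx += unit_x;
    }
}
```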
static force_inline uint32_t
fetch_nearest (pixman_repeat_t src_repeat,
@@ -1613,6 +1617,272 @@ fast_composite_scaled_nearest (pixman_implementation_t *imp,
}
}
#define CACHE_LINE_SIZE 64
#define FAST_SIMPLE_ROTATE(suffix, pix_type) \
\
static void \
blt_rotated_90_trivial_##suffix (pix_type *dst, \
int dst_stride, \
const pix_type *src, \
int src_stride, \
int w, \
int h) \
{ \
int x, y; \
for (y = 0; y < h; y++) \
{ \
const pix_type *s = src + (h - y - 1); \
pix_type *d = dst + dst_stride * y; \
for (x = 0; x < w; x++) \
{ \
*d++ = *s; \
s += src_stride; \
} \
} \
} \
\
static void \
blt_rotated_270_trivial_##suffix (pix_type *dst, \
int dst_stride, \
const pix_type *src, \
int src_stride, \
int w, \
int h) \
{ \
int x, y; \
for (y = 0; y < h; y++) \
{ \
const pix_type *s = src + src_stride * (w - 1) + y; \
pix_type *d = dst + dst_stride * y; \
for (x = 0; x < w; x++) \
{ \
*d++ = *s; \
s -= src_stride; \
} \
} \
} \
\
static void \
blt_rotated_90_##suffix (pix_type *dst, \
int dst_stride, \
const pix_type *src, \
int src_stride, \
int W, \
int H) \
{ \
int x; \
int leading_pixels = 0, trailing_pixels = 0; \
const int TILE_SIZE = CACHE_LINE_SIZE / sizeof(pix_type); \
\
/* \
* split processing into handling destination as TILE_SIZExH cache line \
* aligned vertical stripes (optimistically assuming that destination \
* stride is a multiple of cache line, if not - it will be just a bit \
* slower) \
*/ \
\
if ((uintptr_t)dst & (CACHE_LINE_SIZE - 1)) \
{ \
leading_pixels = TILE_SIZE - (((uintptr_t)dst & \
(CACHE_LINE_SIZE - 1)) / sizeof(pix_type)); \
if (leading_pixels > W) \
leading_pixels = W; \
\
/* unaligned leading part NxH (where N < TILE_SIZE) */ \
blt_rotated_90_trivial_##suffix ( \
dst, \
dst_stride, \
src, \
src_stride, \
leading_pixels, \
H); \
\
dst += leading_pixels; \
src += leading_pixels * src_stride; \
W -= leading_pixels; \
} \
\
if ((uintptr_t)(dst + W) & (CACHE_LINE_SIZE - 1)) \
{ \
trailing_pixels = (((uintptr_t)(dst + W) & \
(CACHE_LINE_SIZE - 1)) / sizeof(pix_type)); \
if (trailing_pixels > W) \
trailing_pixels = W; \
W -= trailing_pixels; \
} \
\
for (x = 0; x < W; x += TILE_SIZE) \
{ \
/* aligned middle part TILE_SIZExH */ \
blt_rotated_90_trivial_##suffix ( \
dst + x, \
dst_stride, \
src + src_stride * x, \
src_stride, \
TILE_SIZE, \
H); \
} \
\
if (trailing_pixels) \
{ \
/* unaligned trailing part NxH (where N < TILE_SIZE) */ \
blt_rotated_90_trivial_##suffix ( \
dst + W, \
dst_stride, \
src + W * src_stride, \
src_stride, \
trailing_pixels, \
H); \
} \
} \
\
static void \
blt_rotated_270_##suffix (pix_type *dst, \
int dst_stride, \
const pix_type *src, \
int src_stride, \
int W, \
int H) \
{ \
int x; \
int leading_pixels = 0, trailing_pixels = 0; \
const int TILE_SIZE = CACHE_LINE_SIZE / sizeof(pix_type); \
\
/* \
* split processing into handling destination as TILE_SIZExH cache line \
* aligned vertical stripes (optimistically assuming that destination \
* stride is a multiple of cache line, if not - it will be just a bit \
* slower) \
*/ \
\
if ((uintptr_t)dst & (CACHE_LINE_SIZE - 1)) \
{ \
leading_pixels = TILE_SIZE - (((uintptr_t)dst & \
(CACHE_LINE_SIZE - 1)) / sizeof(pix_type)); \
if (leading_pixels > W) \
leading_pixels = W; \
\
/* unaligned leading part NxH (where N < TILE_SIZE) */ \
blt_rotated_270_trivial_##suffix ( \
dst, \
dst_stride, \
src + src_stride * (W - leading_pixels), \
src_stride, \
leading_pixels, \
H); \
\
dst += leading_pixels; \
W -= leading_pixels; \
} \
\
if ((uintptr_t)(dst + W) & (CACHE_LINE_SIZE - 1)) \
{ \
trailing_pixels = (((uintptr_t)(dst + W) & \
(CACHE_LINE_SIZE - 1)) / sizeof(pix_type)); \
if (trailing_pixels > W) \
trailing_pixels = W; \
W -= trailing_pixels; \
src += trailing_pixels * src_stride; \
} \
\
for (x = 0; x < W; x += TILE_SIZE) \
{ \
/* aligned middle part TILE_SIZExH */ \
blt_rotated_270_trivial_##suffix ( \
dst + x, \
dst_stride, \
src + src_stride * (W - x - TILE_SIZE), \
src_stride, \
TILE_SIZE, \
H); \
} \
\
if (trailing_pixels) \
{ \
/* unaligned trailing part NxH (where N < TILE_SIZE) */ \
blt_rotated_270_trivial_##suffix ( \
dst + W, \
dst_stride, \
src - trailing_pixels * src_stride, \
src_stride, \
trailing_pixels, \
H); \
} \
} \
\
static void \
fast_composite_rotate_90_##suffix (pixman_implementation_t *imp, \
pixman_op_t op, \
pixman_image_t * src_image, \
pixman_image_t * mask_image, \
pixman_image_t * dst_image, \
int32_t src_x, \
int32_t src_y, \
int32_t mask_x, \
int32_t mask_y, \
int32_t dest_x, \
int32_t dest_y, \
int32_t width, \
int32_t height) \
{ \
pix_type *dst_line; \
pix_type *src_line; \
int dst_stride, src_stride; \
int src_x_t, src_y_t; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, pix_type, \
dst_stride, dst_line, 1); \
src_x_t = -src_y + pixman_fixed_to_int ( \
src_image->common.transform->matrix[0][2] + \
pixman_fixed_1 / 2 - pixman_fixed_e) - height;\
src_y_t = src_x + pixman_fixed_to_int ( \
src_image->common.transform->matrix[1][2] + \
pixman_fixed_1 / 2 - pixman_fixed_e); \
PIXMAN_IMAGE_GET_LINE (src_image, src_x_t, src_y_t, pix_type, \
src_stride, src_line, 1); \
blt_rotated_90_##suffix (dst_line, dst_stride, src_line, src_stride, \
width, height); \
} \
\
static void \
fast_composite_rotate_270_##suffix (pixman_implementation_t *imp, \
pixman_op_t op, \
pixman_image_t * src_image, \
pixman_image_t * mask_image, \
pixman_image_t * dst_image, \
int32_t src_x, \
int32_t src_y, \
int32_t mask_x, \
int32_t mask_y, \
int32_t dest_x, \
int32_t dest_y, \
int32_t width, \
int32_t height) \
{ \
pix_type *dst_line; \
pix_type *src_line; \
int dst_stride, src_stride; \
int src_x_t, src_y_t; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, pix_type, \
dst_stride, dst_line, 1); \
src_x_t = src_y + pixman_fixed_to_int ( \
src_image->common.transform->matrix[0][2] + \
pixman_fixed_1 / 2 - pixman_fixed_e); \
src_y_t = -src_x + pixman_fixed_to_int ( \
src_image->common.transform->matrix[1][2] + \
pixman_fixed_1 / 2 - pixman_fixed_e) - width; \
PIXMAN_IMAGE_GET_LINE (src_image, src_x_t, src_y_t, pix_type, \
src_stride, src_line, 1); \
blt_rotated_270_##suffix (dst_line, dst_stride, src_line, src_stride, \
width, height); \
}
FAST_SIMPLE_ROTATE (8, uint8_t)
FAST_SIMPLE_ROTATE (565, uint16_t)
FAST_SIMPLE_ROTATE (8888, uint32_t)
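blt_rotated_90_trivial above is the scalar core of the new rotation fast paths; the cache-line tiling in blt_rotated_90 only changes the traversal order, not the index math. A standalone copy of the trivial 90-degree kernel for the uint8_t case, useful for checking that dst(x, y) reads src[src_stride * x + (h - 1 - y)]:

```c
#include <assert.h>
#include <stdint.h>

/* Trivial 90-degree rotation, one destination scanline at a time:
 * for each dst row y, walk one src column bottom-up (matches
 * blt_rotated_90_trivial_##suffix in the patch, specialized here
 * to uint8_t). w and h are the destination width and height. */
static void
blt_rotated_90_trivial_u8 (uint8_t *dst, int dst_stride,
                           const uint8_t *src, int src_stride,
                           int w, int h)
{
    int x, y;
    for (y = 0; y < h; y++)
    {
        const uint8_t *s = src + (h - y - 1);
        uint8_t *d = dst + dst_stride * y;
        for (x = 0; x < w; x++)
        {
            *d++ = *s;
            s += src_stride;
        }
    }
}
```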
static const pixman_fast_path_t c_fast_paths[] =
{
PIXMAN_STD_FAST_PATH (OVER, solid, a8, r5g6b5, fast_composite_over_n_8_0565),
@@ -1655,6 +1925,7 @@ static const pixman_fast_path_t c_fast_paths[] =
PIXMAN_STD_FAST_PATH (SRC, solid, null, x8r8g8b8, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, solid, null, a8b8g8r8, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, solid, null, x8b8g8r8, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, solid, null, a1, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, solid, null, a8, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, solid, null, r5g6b5, fast_composite_solid_fill),
PIXMAN_STD_FAST_PATH (SRC, x8r8g8b8, null, a8r8g8b8, fast_composite_src_x888_8888),
@@ -1730,9 +2001,111 @@ static const pixman_fast_path_t c_fast_paths[] =
NEAREST_FAST_PATH (OVER, x8b8g8r8, a8b8g8r8),
NEAREST_FAST_PATH (OVER, a8b8g8r8, a8b8g8r8),
#define SIMPLE_ROTATE_FLAGS(angle) \
(FAST_PATH_ROTATE_ ## angle ## _TRANSFORM | \
FAST_PATH_NEAREST_FILTER | \
FAST_PATH_SAMPLES_COVER_CLIP | \
FAST_PATH_STANDARD_FLAGS)
#define SIMPLE_ROTATE_FAST_PATH(op,s,d,suffix) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, SIMPLE_ROTATE_FLAGS (90), \
PIXMAN_null, 0, \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_rotate_90_##suffix, \
}, \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, SIMPLE_ROTATE_FLAGS (270), \
PIXMAN_null, 0, \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_rotate_270_##suffix, \
}
SIMPLE_ROTATE_FAST_PATH (SRC, a8r8g8b8, a8r8g8b8, 8888),
SIMPLE_ROTATE_FAST_PATH (SRC, a8r8g8b8, x8r8g8b8, 8888),
SIMPLE_ROTATE_FAST_PATH (SRC, x8r8g8b8, x8r8g8b8, 8888),
SIMPLE_ROTATE_FAST_PATH (SRC, r5g6b5, r5g6b5, 565),
SIMPLE_ROTATE_FAST_PATH (SRC, a8, a8, 8),
{ PIXMAN_OP_NONE },
};
#ifdef WORDS_BIGENDIAN
#define A1_FILL_MASK(n, offs) (((1 << (n)) - 1) << (32 - (offs) - (n)))
#else
#define A1_FILL_MASK(n, offs) (((1 << (n)) - 1) << (offs))
#endif
static force_inline void
pixman_fill1_line (uint32_t *dst, int offs, int width, int v)
{
if (offs)
{
int leading_pixels = 32 - offs;
if (leading_pixels >= width)
{
if (v)
*dst |= A1_FILL_MASK (width, offs);
else
*dst &= ~A1_FILL_MASK (width, offs);
return;
}
else
{
if (v)
*dst++ |= A1_FILL_MASK (leading_pixels, offs);
else
*dst++ &= ~A1_FILL_MASK (leading_pixels, offs);
width -= leading_pixels;
}
}
while (width >= 32)
{
if (v)
*dst++ = 0xFFFFFFFF;
else
*dst++ = 0;
width -= 32;
}
if (width > 0)
{
if (v)
*dst |= A1_FILL_MASK (width, 0);
else
*dst &= ~A1_FILL_MASK (width, 0);
}
}
static void
pixman_fill1 (uint32_t *bits,
int stride,
int x,
int y,
int width,
int height,
uint32_t xor)
{
uint32_t *dst = bits + y * stride + (x >> 5);
int offs = x & 31;
if (xor & 1)
{
while (height--)
{
pixman_fill1_line (dst, offs, width, 1);
dst += stride;
}
}
else
{
while (height--)
{
pixman_fill1_line (dst, offs, width, 0);
dst += stride;
}
}
}
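pixman_fill1_line above edits partial words through A1_FILL_MASK and blasts whole 0x00000000/0xFFFFFFFF words in between. A little-endian sketch of the single-word case (assumes width < 32 and offs + width <= 32, as in the partial-word branches; hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/* Set or clear 'width' bits starting at bit 'offs' of one 32-bit
 * word, little-endian bit order. Assumes width < 32 and
 * offs + width <= 32, matching the partial-word cases of
 * pixman_fill1_line (full words take the 0/0xFFFFFFFF path). */
static uint32_t
a1_fill_word (uint32_t word, int offs, int width, int v)
{
    uint32_t mask = ((1u << width) - 1u) << offs;
    return v ? (word | mask) : (word & ~mask);
}
```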
static void
pixman_fill8 (uint32_t *bits,
int stride,
@@ -1819,6 +2192,10 @@ fast_path_fill (pixman_implementation_t *imp,
{
switch (bpp)
{
case 1:
pixman_fill1 (bits, stride, x, y, width, height, xor);
break;
case 8:
pixman_fill8 (bits, stride, x, y, width, height, xor);
break;
@@ -1841,10 +2218,9 @@ fast_path_fill (pixman_implementation_t *imp,
}
pixman_implementation_t *
-_pixman_implementation_create_fast_path (void)
+_pixman_implementation_create_fast_path (pixman_implementation_t *fallback)
{
-pixman_implementation_t *general = _pixman_implementation_create_general ();
-pixman_implementation_t *imp = _pixman_implementation_create (general, c_fast_paths);
+pixman_implementation_t *imp = _pixman_implementation_create (fallback, c_fast_paths);
imp->fill = fast_path_fill;


@@ -138,18 +138,22 @@ pad_repeat_get_scanline_bounds (int32_t source_image_width,
#define FAST_NEAREST_SCANLINE(scanline_func_name, SRC_FORMAT, DST_FORMAT, \
src_type_t, dst_type_t, OP, repeat_mode) \
static force_inline void \
-scanline_func_name (dst_type_t *dst, \
-src_type_t *src, \
-int32_t w, \
-pixman_fixed_t vx, \
-pixman_fixed_t unit_x, \
-pixman_fixed_t max_vx) \
+scanline_func_name (dst_type_t *dst, \
+const src_type_t *src, \
+int32_t w, \
+pixman_fixed_t vx, \
+pixman_fixed_t unit_x, \
+pixman_fixed_t max_vx, \
+pixman_bool_t fully_transparent_src) \
{ \
uint32_t d; \
src_type_t s1, s2; \
uint8_t a1, a2; \
int x1, x2; \
\
if (PIXMAN_OP_ ## OP == PIXMAN_OP_OVER && fully_transparent_src) \
return; \
\
if (PIXMAN_OP_ ## OP != PIXMAN_OP_SRC && PIXMAN_OP_ ## OP != PIXMAN_OP_OVER) \
abort(); \
\
@@ -245,10 +249,10 @@ scanline_func_name (dst_type_t *dst, \
} \
}
-#define FAST_NEAREST_MAINLOOP(scale_func_name, scanline_func, src_type_t, dst_type_t, \
-repeat_mode) \
+#define FAST_NEAREST_MAINLOOP_INT(scale_func_name, scanline_func, src_type_t, mask_type_t, \
+dst_type_t, repeat_mode, have_mask, mask_is_solid) \
static void \
-fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp, \
+fast_composite_scaled_nearest ## scale_func_name (pixman_implementation_t *imp, \
pixman_op_t op, \
pixman_image_t * src_image, \
pixman_image_t * mask_image, \
@@ -263,9 +267,10 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
int32_t height) \
{ \
dst_type_t *dst_line; \
mask_type_t *mask_line; \
src_type_t *src_first_line; \
int y; \
-pixman_fixed_t max_vx = max_vx; /* suppress uninitialized variable warning */ \
+pixman_fixed_t max_vx = INT32_MAX; /* suppress uninitialized variable warning */ \
pixman_fixed_t max_vy; \
pixman_vector_t v; \
pixman_fixed_t vx, vy; \
@@ -274,9 +279,19 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
\
src_type_t *src; \
dst_type_t *dst; \
int src_stride, dst_stride; \
mask_type_t solid_mask; \
const mask_type_t *mask = &solid_mask; \
int src_stride, mask_stride, dst_stride; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dst_x, dst_y, dst_type_t, dst_stride, dst_line, 1); \
if (have_mask) \
{ \
if (mask_is_solid) \
solid_mask = _pixman_image_get_solid (imp, mask_image, dst_image->bits.format); \
else \
PIXMAN_IMAGE_GET_LINE (mask_image, mask_x, mask_y, mask_type_t, \
mask_stride, mask_line, 1); \
} \
/* pass in 0 instead of src_x and src_y because src_x and src_y need to be \
* transformed from destination space to source space */ \
PIXMAN_IMAGE_GET_LINE (src_image, 0, 0, src_type_t, src_stride, src_first_line, 1); \
@@ -321,6 +336,11 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
{ \
dst = dst_line; \
dst_line += dst_stride; \
if (have_mask && !mask_is_solid) \
{ \
mask = mask_line; \
mask_line += mask_stride; \
} \
\
y = vy >> 16; \
vy += unit_y; \
@@ -332,58 +352,89 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
src = src_first_line + src_stride * y; \
if (left_pad > 0) \
{ \
-scanline_func (dst, src, left_pad, 0, 0, 0); \
+scanline_func (mask, dst, src, left_pad, 0, 0, 0, FALSE); \
} \
if (width > 0) \
{ \
-scanline_func (dst + left_pad, src, width, vx, unit_x, 0); \
+scanline_func (mask + (mask_is_solid ? 0 : left_pad), \
+dst + left_pad, src, width, vx, unit_x, 0, FALSE); \
} \
if (right_pad > 0) \
{ \
-scanline_func (dst + left_pad + width, src + src_image->bits.width - 1, \
-right_pad, 0, 0, 0); \
+scanline_func (mask + (mask_is_solid ? 0 : left_pad + width), \
+dst + left_pad + width, src + src_image->bits.width - 1, \
+right_pad, 0, 0, 0, FALSE); \
} \
} \
else if (PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_NONE) \
{ \
-static src_type_t zero = 0; \
+static const src_type_t zero[1] = { 0 }; \
if (y < 0 || y >= src_image->bits.height) \
{ \
-scanline_func (dst, &zero, left_pad + width + right_pad, 0, 0, 0); \
+scanline_func (mask, dst, zero, left_pad + width + right_pad, 0, 0, 0, TRUE); \
continue; \
} \
src = src_first_line + src_stride * y; \
if (left_pad > 0) \
{ \
-scanline_func (dst, &zero, left_pad, 0, 0, 0); \
+scanline_func (mask, dst, zero, left_pad, 0, 0, 0, TRUE); \
} \
if (width > 0) \
{ \
-scanline_func (dst + left_pad, src, width, vx, unit_x, 0); \
+scanline_func (mask + (mask_is_solid ? 0 : left_pad), \
+dst + left_pad, src, width, vx, unit_x, 0, FALSE); \
} \
if (right_pad > 0) \
{ \
-scanline_func (dst + left_pad + width, &zero, right_pad, 0, 0, 0); \
+scanline_func (mask + (mask_is_solid ? 0 : left_pad + width), \
+dst + left_pad + width, zero, right_pad, 0, 0, 0, TRUE); \
} \
} \
else \
{ \
src = src_first_line + src_stride * y; \
-scanline_func (dst, src, width, vx, unit_x, max_vx); \
+scanline_func (mask, dst, src, width, vx, unit_x, max_vx, FALSE); \
} \
} \
}
/* A workaround for old sun studio, see: https://bugs.freedesktop.org/show_bug.cgi?id=32764 */
#define FAST_NEAREST_MAINLOOP_COMMON(scale_func_name, scanline_func, src_type_t, mask_type_t, \
dst_type_t, repeat_mode, have_mask, mask_is_solid) \
FAST_NEAREST_MAINLOOP_INT(_ ## scale_func_name, scanline_func, src_type_t, mask_type_t, \
dst_type_t, repeat_mode, have_mask, mask_is_solid)
#define FAST_NEAREST_MAINLOOP_NOMASK(scale_func_name, scanline_func, src_type_t, dst_type_t, \
repeat_mode) \
static force_inline void \
scanline_func##scale_func_name##_wrapper ( \
const uint8_t *mask, \
dst_type_t *dst, \
const src_type_t *src, \
int32_t w, \
pixman_fixed_t vx, \
pixman_fixed_t unit_x, \
pixman_fixed_t max_vx, \
pixman_bool_t fully_transparent_src) \
{ \
scanline_func (dst, src, w, vx, unit_x, max_vx, fully_transparent_src); \
} \
FAST_NEAREST_MAINLOOP_INT (scale_func_name, scanline_func##scale_func_name##_wrapper, \
src_type_t, uint8_t, dst_type_t, repeat_mode, FALSE, FALSE)
#define FAST_NEAREST_MAINLOOP(scale_func_name, scanline_func, src_type_t, dst_type_t, \
repeat_mode) \
FAST_NEAREST_MAINLOOP_NOMASK(_ ## scale_func_name, scanline_func, src_type_t, \
dst_type_t, repeat_mode)
#define FAST_NEAREST(scale_func_name, SRC_FORMAT, DST_FORMAT, \
src_type_t, dst_type_t, OP, repeat_mode) \
FAST_NEAREST_SCANLINE(scaled_nearest_scanline_ ## scale_func_name ## _ ## OP, \
SRC_FORMAT, DST_FORMAT, src_type_t, dst_type_t, \
OP, repeat_mode) \
-FAST_NEAREST_MAINLOOP(scale_func_name##_##OP, \
+FAST_NEAREST_MAINLOOP_NOMASK(_ ## scale_func_name ## _ ## OP, \
scaled_nearest_scanline_ ## scale_func_name ## _ ## OP, \
-src_type_t, dst_type_t, repeat_mode) \
-\
-extern int no_such_variable
+src_type_t, dst_type_t, repeat_mode)
#define SCALED_NEAREST_FLAGS \
@@ -435,6 +486,90 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
fast_composite_scaled_nearest_ ## func ## _cover ## _ ## op, \
}
#define SIMPLE_NEAREST_A8_MASK_FAST_PATH_NORMAL(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_NORMAL_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _normal ## _ ## op, \
}
#define SIMPLE_NEAREST_A8_MASK_FAST_PATH_PAD(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_PAD_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _pad ## _ ## op, \
}
#define SIMPLE_NEAREST_A8_MASK_FAST_PATH_NONE(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_NONE_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _none ## _ ## op, \
}
#define SIMPLE_NEAREST_A8_MASK_FAST_PATH_COVER(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
SCALED_NEAREST_FLAGS | FAST_PATH_SAMPLES_COVER_CLIP, \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _cover ## _ ## op, \
}
#define SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_NORMAL(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_NORMAL_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _normal ## _ ## op, \
}
#define SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_PAD(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_PAD_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _pad ## _ ## op, \
}
#define SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_NONE(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_NEAREST_FLAGS | \
FAST_PATH_NONE_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _none ## _ ## op, \
}
#define SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_COVER(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
SCALED_NEAREST_FLAGS | FAST_PATH_SAMPLES_COVER_CLIP, \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_nearest_ ## func ## _cover ## _ ## op, \
}
/* Prefer the use of 'cover' variant, because it is faster */
#define SIMPLE_NEAREST_FAST_PATH(op,s,d,func) \
SIMPLE_NEAREST_FAST_PATH_COVER (op,s,d,func), \
@@ -442,4 +577,446 @@ fast_composite_scaled_nearest_ ## scale_func_name (pixman_implementation_t *imp,
SIMPLE_NEAREST_FAST_PATH_PAD (op,s,d,func), \
SIMPLE_NEAREST_FAST_PATH_NORMAL (op,s,d,func)
#define SIMPLE_NEAREST_A8_MASK_FAST_PATH(op,s,d,func) \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_NEAREST_A8_MASK_FAST_PATH_PAD (op,s,d,func)
#define SIMPLE_NEAREST_SOLID_MASK_FAST_PATH(op,s,d,func) \
SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_NEAREST_SOLID_MASK_FAST_PATH_PAD (op,s,d,func)
/*****************************************************************************/
/*
* Identify 5 zones in each scanline for bilinear scaling. Depending on
* whether 2 pixels to be interpolated are fetched from the image itself,
* from the padding area around it or from both image and padding area.
*/
static force_inline void
bilinear_pad_repeat_get_scanline_bounds (int32_t source_image_width,
pixman_fixed_t vx,
pixman_fixed_t unit_x,
int32_t * left_pad,
int32_t * left_tz,
int32_t * width,
int32_t * right_tz,
int32_t * right_pad)
{
int width1 = *width, left_pad1, right_pad1;
int width2 = *width, left_pad2, right_pad2;
pad_repeat_get_scanline_bounds (source_image_width, vx, unit_x,
&width1, &left_pad1, &right_pad1);
pad_repeat_get_scanline_bounds (source_image_width, vx + pixman_fixed_1,
unit_x, &width2, &left_pad2, &right_pad2);
*left_pad = left_pad2;
*left_tz = left_pad1 - left_pad2;
*right_tz = right_pad2 - right_pad1;
*right_pad = right_pad1;
*width -= *left_pad + *left_tz + *right_tz + *right_pad;
}
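The helper above derives the five bilinear zones from two calls to pad_repeat_get_scanline_bounds. That underlying three-way split (left padding, in-image span, right padding) can be modelled with a straightforward per-pixel loop; the real helper computes the same bounds arithmetically, so the helper name below is a hypothetical stand-in:

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t pixman_fixed_t;   /* 16.16 fixed point, as in pixman */

/* Loop-based model of the scanline-bounds split: count how many of
 * the n destination pixels sample before the source row (left_pad),
 * inside it (width), and past its end (right_pad). */
static void
scanline_bounds (pixman_fixed_t src_width_fixed, pixman_fixed_t vx,
                 pixman_fixed_t unit_x, int32_t n,
                 int32_t *left_pad, int32_t *width, int32_t *right_pad)
{
    int32_t i;
    *left_pad = *width = *right_pad = 0;
    for (i = 0; i < n; i++, vx += unit_x)
    {
        if (vx < 0)
            (*left_pad)++;
        else if (vx < src_width_fixed)
            (*width)++;
        else
            (*right_pad)++;
    }
}
```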
/*
* Main loop template for single pass bilinear scaling. It needs to be
* provided with 'scanline_func' which should do the compositing operation.
* The needed function has the following prototype:
*
* scanline_func (dst_type_t * dst,
* const mask_type_ * mask,
* const src_type_t * src_top,
* const src_type_t * src_bottom,
* int32_t width,
* int weight_top,
* int weight_bottom,
* pixman_fixed_t vx,
* pixman_fixed_t unit_x,
* pixman_fixed_t max_vx,
* pixman_bool_t zero_src)
*
* Where:
* dst - destination scanline buffer for storing results
* mask - mask buffer (or single value for solid mask)
* src_top, src_bottom - two source scanlines
* width - number of pixels to process
* weight_top - weight of the top row for interpolation
* weight_bottom - weight of the bottom row for interpolation
* vx - initial position for fetching the first pair of
* pixels from the source buffer
* unit_x - position increment needed to move to the next pair
* of pixels
* max_vx - image size as a fixed point value, can be used for
* implementing NORMAL repeat (when it is supported)
* zero_src - boolean hint variable, which is set to TRUE when
* all source pixels are fetched from zero padding
* zone for NONE repeat
*
* Note: normally the sum of 'weight_top' and 'weight_bottom' is equal to 256,
* but sometimes it may be less than that for NONE repeat when handling
* fuzzy antialiased top or bottom image edges. Also both top and
* bottom weight variables are guaranteed to have value in 0-255
* range and can fit into unsigned byte or be used with 8-bit SIMD
* multiplication instructions.
*/
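The weight convention documented above (weight_top + weight_bottom == 256, each in 0..255) can be exercised with a scalar single-channel sample. This is a standalone sketch, not the template's actual scanline function; the horizontal weight is taken from bits 8..15 of the 16.16 vx, mirroring the vertical `(vy >> 8) & 0xff` trick used by the main loop:

```c
#include <assert.h>
#include <stdint.h>

/* One bilinear sample of an 8-bit channel with 0..256 weights:
 * lerp top and bottom rows horizontally by fx, then blend the two
 * rows by weight_top/weight_bottom and drop the 16 fraction bits. */
static uint8_t
bilinear_sample_8 (const uint8_t *top, const uint8_t *bottom,
                   int weight_top, int weight_bottom, int32_t vx)
{
    int x  = vx >> 16;
    int fx = (vx >> 8) & 0xff;            /* horizontal weight, 0..255 */
    int t = top[x]    * (256 - fx) + top[x + 1]    * fx;
    int b = bottom[x] * (256 - fx) + bottom[x + 1] * fx;
    return (uint8_t) ((t * weight_top + b * weight_bottom) >> 16);
}
```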
#define FAST_BILINEAR_MAINLOOP_INT(scale_func_name, scanline_func, src_type_t, mask_type_t, \
dst_type_t, repeat_mode, have_mask, mask_is_solid) \
static void \
fast_composite_scaled_bilinear ## scale_func_name (pixman_implementation_t *imp, \
pixman_op_t op, \
pixman_image_t * src_image, \
pixman_image_t * mask_image, \
pixman_image_t * dst_image, \
int32_t src_x, \
int32_t src_y, \
int32_t mask_x, \
int32_t mask_y, \
int32_t dst_x, \
int32_t dst_y, \
int32_t width, \
int32_t height) \
{ \
dst_type_t *dst_line; \
mask_type_t *mask_line; \
src_type_t *src_first_line; \
int y1, y2; \
pixman_fixed_t max_vx = INT32_MAX; /* suppress uninitialized variable warning */ \
pixman_vector_t v; \
pixman_fixed_t vx, vy; \
pixman_fixed_t unit_x, unit_y; \
int32_t left_pad, left_tz, right_tz, right_pad; \
\
dst_type_t *dst; \
mask_type_t solid_mask; \
const mask_type_t *mask = &solid_mask; \
int src_stride, mask_stride, dst_stride; \
\
PIXMAN_IMAGE_GET_LINE (dst_image, dst_x, dst_y, dst_type_t, dst_stride, dst_line, 1); \
if (have_mask) \
{ \
if (mask_is_solid) \
{ \
solid_mask = _pixman_image_get_solid (imp, mask_image, dst_image->bits.format); \
mask_stride = 0; \
} \
else \
{ \
PIXMAN_IMAGE_GET_LINE (mask_image, mask_x, mask_y, mask_type_t, \
mask_stride, mask_line, 1); \
} \
} \
/* pass in 0 instead of src_x and src_y because src_x and src_y need to be \
* transformed from destination space to source space */ \
PIXMAN_IMAGE_GET_LINE (src_image, 0, 0, src_type_t, src_stride, src_first_line, 1); \
\
/* reference point is the center of the pixel */ \
v.vector[0] = pixman_int_to_fixed (src_x) + pixman_fixed_1 / 2; \
v.vector[1] = pixman_int_to_fixed (src_y) + pixman_fixed_1 / 2; \
v.vector[2] = pixman_fixed_1; \
\
if (!pixman_transform_point_3d (src_image->common.transform, &v)) \
return; \
\
unit_x = src_image->common.transform->matrix[0][0]; \
unit_y = src_image->common.transform->matrix[1][1]; \
\
v.vector[0] -= pixman_fixed_1 / 2; \
v.vector[1] -= pixman_fixed_1 / 2; \
\
vy = v.vector[1]; \
\
if (PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_PAD || \
PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_NONE) \
{ \
bilinear_pad_repeat_get_scanline_bounds (src_image->bits.width, v.vector[0], unit_x, \
&left_pad, &left_tz, &width, &right_tz, &right_pad); \
if (PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_PAD) \
{ \
/* PAD repeat does not need special handling for 'transition zones' and */ \
/* they can be combined with 'padding zones' safely */ \
left_pad += left_tz; \
right_pad += right_tz; \
left_tz = right_tz = 0; \
} \
v.vector[0] += left_pad * unit_x; \
} \
\
while (--height >= 0) \
{ \
int weight1, weight2; \
dst = dst_line; \
dst_line += dst_stride; \
vx = v.vector[0]; \
if (have_mask && !mask_is_solid) \
{ \
mask = mask_line; \
mask_line += mask_stride; \
} \
\
y1 = pixman_fixed_to_int (vy); \
weight2 = (vy >> 8) & 0xff; \
if (weight2) \
{ \
/* normal case, both row weights are in 0-255 range and fit unsigned byte */ \
y2 = y1 + 1; \
weight1 = 256 - weight2; \
} \
else \
{ \
/* set both top and bottom row to the same scanline, and weights to 128+128 */ \
y2 = y1; \
weight1 = weight2 = 128; \
} \
vy += unit_y; \
if (PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_PAD) \
{ \
src_type_t *src1, *src2; \
src_type_t buf1[2]; \
src_type_t buf2[2]; \
repeat (PIXMAN_REPEAT_PAD, &y1, src_image->bits.height); \
repeat (PIXMAN_REPEAT_PAD, &y2, src_image->bits.height); \
src1 = src_first_line + src_stride * y1; \
src2 = src_first_line + src_stride * y2; \
\
if (left_pad > 0) \
{ \
buf1[0] = buf1[1] = src1[0]; \
buf2[0] = buf2[1] = src2[0]; \
scanline_func (dst, mask, \
buf1, buf2, left_pad, weight1, weight2, 0, 0, 0, FALSE); \
dst += left_pad; \
if (have_mask && !mask_is_solid) \
mask += left_pad; \
} \
if (width > 0) \
{ \
scanline_func (dst, mask, \
src1, src2, width, weight1, weight2, vx, unit_x, 0, FALSE); \
dst += width; \
if (have_mask && !mask_is_solid) \
mask += width; \
} \
if (right_pad > 0) \
{ \
buf1[0] = buf1[1] = src1[src_image->bits.width - 1]; \
buf2[0] = buf2[1] = src2[src_image->bits.width - 1]; \
scanline_func (dst, mask, \
buf1, buf2, right_pad, weight1, weight2, 0, 0, 0, FALSE); \
} \
} \
else if (PIXMAN_REPEAT_ ## repeat_mode == PIXMAN_REPEAT_NONE) \
{ \
src_type_t *src1, *src2; \
src_type_t buf1[2]; \
src_type_t buf2[2]; \
/* handle top/bottom zero padding by just setting weights to 0 if needed */ \
if (y1 < 0) \
{ \
weight1 = 0; \
y1 = 0; \
} \
if (y1 >= src_image->bits.height) \
{ \
weight1 = 0; \
y1 = src_image->bits.height - 1; \
} \
if (y2 < 0) \
{ \
weight2 = 0; \
y2 = 0; \
} \
if (y2 >= src_image->bits.height) \
{ \
weight2 = 0; \
y2 = src_image->bits.height - 1; \
} \
src1 = src_first_line + src_stride * y1; \
src2 = src_first_line + src_stride * y2; \
\
if (left_pad > 0) \
{ \
buf1[0] = buf1[1] = 0; \
buf2[0] = buf2[1] = 0; \
scanline_func (dst, mask, \
buf1, buf2, left_pad, weight1, weight2, 0, 0, 0, TRUE); \
dst += left_pad; \
if (have_mask && !mask_is_solid) \
mask += left_pad; \
} \
if (left_tz > 0) \
{ \
buf1[0] = 0; \
buf1[1] = src1[0]; \
buf2[0] = 0; \
buf2[1] = src2[0]; \
scanline_func (dst, mask, \
buf1, buf2, left_tz, weight1, weight2, \
pixman_fixed_frac (vx), unit_x, 0, FALSE); \
dst += left_tz; \
if (have_mask && !mask_is_solid) \
mask += left_tz; \
vx += left_tz * unit_x; \
} \
if (width > 0) \
{ \
scanline_func (dst, mask, \
src1, src2, width, weight1, weight2, vx, unit_x, 0, FALSE); \
dst += width; \
if (have_mask && !mask_is_solid) \
mask += width; \
vx += width * unit_x; \
} \
if (right_tz > 0) \
{ \
buf1[0] = src1[src_image->bits.width - 1]; \
buf1[1] = 0; \
buf2[0] = src2[src_image->bits.width - 1]; \
buf2[1] = 0; \
scanline_func (dst, mask, \
buf1, buf2, right_tz, weight1, weight2, \
pixman_fixed_frac (vx), unit_x, 0, FALSE); \
dst += right_tz; \
if (have_mask && !mask_is_solid) \
mask += right_tz; \
} \
if (right_pad > 0) \
{ \
buf1[0] = buf1[1] = 0; \
buf2[0] = buf2[1] = 0; \
scanline_func (dst, mask, \
buf1, buf2, right_pad, weight1, weight2, 0, 0, 0, TRUE); \
} \
} \
else \
{ \
scanline_func (dst, mask, src_first_line + src_stride * y1, \
src_first_line + src_stride * y2, width, \
weight1, weight2, vx, unit_x, max_vx, FALSE); \
} \
} \
}
/* A workaround for old sun studio, see: https://bugs.freedesktop.org/show_bug.cgi?id=32764 */
#define FAST_BILINEAR_MAINLOOP_COMMON(scale_func_name, scanline_func, src_type_t, mask_type_t, \
dst_type_t, repeat_mode, have_mask, mask_is_solid) \
FAST_BILINEAR_MAINLOOP_INT(_ ## scale_func_name, scanline_func, src_type_t, mask_type_t,\
dst_type_t, repeat_mode, have_mask, mask_is_solid)
#define SCALED_BILINEAR_FLAGS \
(FAST_PATH_SCALE_TRANSFORM | \
FAST_PATH_NO_ALPHA_MAP | \
FAST_PATH_BILINEAR_FILTER | \
FAST_PATH_NO_ACCESSORS | \
FAST_PATH_NARROW_FORMAT)
#define SIMPLE_BILINEAR_FAST_PATH_PAD(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_PAD_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_null, 0, \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _pad ## _ ## op, \
}
#define SIMPLE_BILINEAR_FAST_PATH_NONE(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_NONE_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_null, 0, \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _none ## _ ## op, \
}
#define SIMPLE_BILINEAR_FAST_PATH_COVER(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
SCALED_BILINEAR_FLAGS | FAST_PATH_SAMPLES_COVER_CLIP, \
PIXMAN_null, 0, \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _cover ## _ ## op, \
}
#define SIMPLE_BILINEAR_A8_MASK_FAST_PATH_PAD(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_PAD_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _pad ## _ ## op, \
}
#define SIMPLE_BILINEAR_A8_MASK_FAST_PATH_NONE(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_NONE_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _none ## _ ## op, \
}
#define SIMPLE_BILINEAR_A8_MASK_FAST_PATH_COVER(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
SCALED_BILINEAR_FLAGS | FAST_PATH_SAMPLES_COVER_CLIP, \
PIXMAN_a8, MASK_FLAGS (a8, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _cover ## _ ## op, \
}
#define SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_PAD(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_PAD_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _pad ## _ ## op, \
}
#define SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_NONE(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
(SCALED_BILINEAR_FLAGS | \
FAST_PATH_NONE_REPEAT | \
FAST_PATH_X_UNIT_POSITIVE), \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _none ## _ ## op, \
}
#define SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_COVER(op,s,d,func) \
{ PIXMAN_OP_ ## op, \
PIXMAN_ ## s, \
SCALED_BILINEAR_FLAGS | FAST_PATH_SAMPLES_COVER_CLIP, \
PIXMAN_solid, MASK_FLAGS (solid, FAST_PATH_UNIFIED_ALPHA), \
PIXMAN_ ## d, FAST_PATH_STD_DEST_FLAGS, \
fast_composite_scaled_bilinear_ ## func ## _cover ## _ ## op, \
}
/* Prefer the use of 'cover' variant, because it is faster */
#define SIMPLE_BILINEAR_FAST_PATH(op,s,d,func) \
SIMPLE_BILINEAR_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_BILINEAR_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_BILINEAR_FAST_PATH_PAD (op,s,d,func)
#define SIMPLE_BILINEAR_A8_MASK_FAST_PATH(op,s,d,func) \
SIMPLE_BILINEAR_A8_MASK_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_BILINEAR_A8_MASK_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_BILINEAR_A8_MASK_FAST_PATH_PAD (op,s,d,func)
#define SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH(op,s,d,func) \
SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_COVER (op,s,d,func), \
SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_NONE (op,s,d,func), \
SIMPLE_BILINEAR_SOLID_MASK_FAST_PATH_PAD (op,s,d,func)
#endif


@ -36,8 +36,66 @@
#include <stdlib.h>
#include <string.h>
#include "pixman-private.h"
#include "pixman-combine32.h"
static void
general_src_iter_init (pixman_implementation_t *imp, pixman_iter_t *iter)
{
pixman_image_t *image = iter->image;
if (image->type == SOLID)
_pixman_solid_fill_iter_init (image, iter);
else if (image->type == LINEAR)
_pixman_linear_gradient_iter_init (image, iter);
else if (image->type == RADIAL)
_pixman_radial_gradient_iter_init (image, iter);
else if (image->type == CONICAL)
_pixman_conical_gradient_iter_init (image, iter);
else if (image->type == BITS)
_pixman_bits_image_src_iter_init (image, iter);
else
_pixman_log_error (FUNC, "Pixman bug: unknown image type\n");
}
static void
general_dest_iter_init (pixman_implementation_t *imp, pixman_iter_t *iter)
{
if (iter->image->type == BITS)
{
_pixman_bits_image_dest_iter_init (iter->image, iter);
}
else
{
_pixman_log_error (FUNC, "Trying to write to a non-writable image");
}
}
typedef struct op_info_t op_info_t;
struct op_info_t
{
uint8_t src, dst;
};
#define ITER_IGNORE_BOTH \
(ITER_IGNORE_ALPHA | ITER_IGNORE_RGB | ITER_LOCALIZED_ALPHA)
static const op_info_t op_flags[PIXMAN_N_OPERATORS] =
{
/* Src Dst */
{ ITER_IGNORE_BOTH, ITER_IGNORE_BOTH }, /* CLEAR */
{ ITER_LOCALIZED_ALPHA, ITER_IGNORE_BOTH }, /* SRC */
{ ITER_IGNORE_BOTH, ITER_LOCALIZED_ALPHA }, /* DST */
{ 0, ITER_LOCALIZED_ALPHA }, /* OVER */
{ ITER_LOCALIZED_ALPHA, 0 }, /* OVER_REVERSE */
{ ITER_LOCALIZED_ALPHA, ITER_IGNORE_RGB }, /* IN */
{ ITER_IGNORE_RGB, ITER_LOCALIZED_ALPHA }, /* IN_REVERSE */
{ ITER_LOCALIZED_ALPHA, ITER_IGNORE_RGB }, /* OUT */
{ ITER_IGNORE_RGB, ITER_LOCALIZED_ALPHA }, /* OUT_REVERSE */
{ 0, 0 }, /* ATOP */
{ 0, 0 }, /* ATOP_REVERSE */
{ 0, 0 }, /* XOR */
{ ITER_LOCALIZED_ALPHA, ITER_LOCALIZED_ALPHA }, /* ADD */
{ 0, 0 }, /* SATURATE */
};
#define SCANLINE_BUFFER_LENGTH 8192
@ -56,24 +114,28 @@ general_composite_rect (pixman_implementation_t *imp,
int32_t width,
int32_t height)
{
uint8_t stack_scanline_buffer[SCANLINE_BUFFER_LENGTH * 3];
uint8_t *scanline_buffer = stack_scanline_buffer;
uint64_t stack_scanline_buffer[(SCANLINE_BUFFER_LENGTH * 3 + 7) / 8];
uint8_t *scanline_buffer = (uint8_t *) stack_scanline_buffer;
uint8_t *src_buffer, *mask_buffer, *dest_buffer;
fetch_scanline_t fetch_src = NULL, fetch_mask = NULL, fetch_dest = NULL;
pixman_iter_t src_iter, mask_iter, dest_iter;
pixman_combine_32_func_t compose;
store_scanline_t store;
source_image_class_t src_class, mask_class;
pixman_bool_t component_alpha;
uint32_t *bits;
int32_t stride;
int narrow, Bpp;
iter_flags_t narrow, src_flags;
int Bpp;
int i;
narrow =
(src->common.flags & FAST_PATH_NARROW_FORMAT) &&
if ((src->common.flags & FAST_PATH_NARROW_FORMAT) &&
(!mask || mask->common.flags & FAST_PATH_NARROW_FORMAT) &&
(dest->common.flags & FAST_PATH_NARROW_FORMAT);
Bpp = narrow ? 4 : 8;
(dest->common.flags & FAST_PATH_NARROW_FORMAT))
{
narrow = ITER_NARROW;
Bpp = 4;
}
else
{
narrow = 0;
Bpp = 8;
}
if (width * Bpp > SCANLINE_BUFFER_LENGTH)
{
@ -87,90 +149,39 @@ general_composite_rect (pixman_implementation_t *imp,
mask_buffer = src_buffer + width * Bpp;
dest_buffer = mask_buffer + width * Bpp;
src_class = _pixman_image_classify (src,
src_x, src_y,
width, height);
/* src iter */
src_flags = narrow | op_flags[op].src;
mask_class = SOURCE_IMAGE_CLASS_UNKNOWN;
_pixman_implementation_src_iter_init (imp->toplevel, &src_iter, src,
src_x, src_y, width, height,
src_buffer, src_flags);
if (mask)
/* mask iter */
if ((src_flags & (ITER_IGNORE_ALPHA | ITER_IGNORE_RGB)) ==
(ITER_IGNORE_ALPHA | ITER_IGNORE_RGB))
{
mask_class = _pixman_image_classify (mask,
src_x, src_y,
width, height);
}
if (op == PIXMAN_OP_CLEAR)
fetch_src = NULL;
else if (narrow)
fetch_src = _pixman_image_get_scanline_32;
else
fetch_src = _pixman_image_get_scanline_64;
if (!mask || op == PIXMAN_OP_CLEAR)
fetch_mask = NULL;
else if (narrow)
fetch_mask = _pixman_image_get_scanline_32;
else
fetch_mask = _pixman_image_get_scanline_64;
if (op == PIXMAN_OP_CLEAR || op == PIXMAN_OP_SRC)
fetch_dest = NULL;
else if (narrow)
fetch_dest = _pixman_image_get_scanline_32;
else
fetch_dest = _pixman_image_get_scanline_64;
if (narrow)
store = _pixman_image_store_scanline_32;
else
store = _pixman_image_store_scanline_64;
/* Skip the store step and composite directly into the
* destination if the output format of the compose func matches
* the destination format.
*
* If the destination format is a8r8g8b8 then we can always do
* this. If it is x8r8g8b8, then we can only do it if the
* operator doesn't make use of destination alpha.
*/
if ((dest->bits.format == PIXMAN_a8r8g8b8) ||
(dest->bits.format == PIXMAN_x8r8g8b8 &&
(op == PIXMAN_OP_OVER ||
op == PIXMAN_OP_ADD ||
op == PIXMAN_OP_SRC ||
op == PIXMAN_OP_CLEAR ||
op == PIXMAN_OP_IN_REVERSE ||
op == PIXMAN_OP_OUT_REVERSE ||
op == PIXMAN_OP_DST)))
{
if (narrow &&
!dest->common.alpha_map &&
!dest->bits.write_func)
{
store = NULL;
}
}
if (!store)
{
bits = dest->bits.bits;
stride = dest->bits.rowstride;
}
else
{
bits = NULL;
stride = 0;
/* If it doesn't matter what the source is, then it doesn't matter
* what the mask is
*/
mask = NULL;
}
component_alpha =
fetch_src &&
fetch_mask &&
mask &&
mask->common.type == BITS &&
mask->common.component_alpha &&
PIXMAN_FORMAT_RGB (mask->bits.format);
_pixman_implementation_src_iter_init (
imp->toplevel, &mask_iter, mask, mask_x, mask_y, width, height,
mask_buffer, narrow | (component_alpha? 0 : ITER_IGNORE_RGB));
/* dest iter */
_pixman_implementation_dest_iter_init (imp->toplevel, &dest_iter, dest,
dest_x, dest_y, width, height,
dest_buffer,
narrow | op_flags[op].dst);
if (narrow)
{
if (component_alpha)
@ -189,73 +200,20 @@ general_composite_rect (pixman_implementation_t *imp,
if (!compose)
return;
if (!fetch_mask)
mask_buffer = NULL;
for (i = 0; i < height; ++i)
{
/* fill first half of scanline with source */
if (fetch_src)
{
if (fetch_mask)
{
/* fetch mask before source so that fetching of
source can be optimized */
fetch_mask (mask, mask_x, mask_y + i,
width, (void *)mask_buffer, 0);
uint32_t *s, *m, *d;
if (mask_class == SOURCE_IMAGE_CLASS_HORIZONTAL)
fetch_mask = NULL;
}
m = mask_iter.get_scanline (&mask_iter, NULL);
s = src_iter.get_scanline (&src_iter, m);
d = dest_iter.get_scanline (&dest_iter, NULL);
if (src_class == SOURCE_IMAGE_CLASS_HORIZONTAL)
{
fetch_src (src, src_x, src_y + i,
width, (void *)src_buffer, 0);
fetch_src = NULL;
}
else
{
fetch_src (src, src_x, src_y + i,
width, (void *)src_buffer, (void *)mask_buffer);
}
}
else if (fetch_mask)
{
fetch_mask (mask, mask_x, mask_y + i,
width, (void *)mask_buffer, 0);
}
compose (imp->toplevel, op, d, s, m, width);
if (store)
{
/* fill dest into second half of scanline */
if (fetch_dest)
{
fetch_dest (dest, dest_x, dest_y + i,
width, (void *)dest_buffer, 0);
}
/* blend */
compose (imp->toplevel, op,
(void *)dest_buffer,
(void *)src_buffer,
(void *)mask_buffer,
width);
/* write back */
store (&(dest->bits), dest_x, dest_y + i, width,
(void *)dest_buffer);
}
else
{
/* blend */
compose (imp->toplevel, op,
bits + (dest_y + i) * stride + dest_x,
(void *)src_buffer, (void *)mask_buffer, width);
}
dest_iter.write_back (&dest_iter);
}
if (scanline_buffer != stack_scanline_buffer)
if (scanline_buffer != (uint8_t *) stack_scanline_buffer)
free (scanline_buffer);
}
@ -309,6 +267,8 @@ _pixman_implementation_create_general (void)
imp->blt = general_blt;
imp->fill = general_fill;
imp->src_iter_init = general_src_iter_init;
imp->dest_iter_init = general_dest_iter_init;
return imp;
}


@ -30,7 +30,6 @@
#include <assert.h>
#include "pixman-private.h"
#include "pixman-combine32.h"
pixman_bool_t
_pixman_init_gradient (gradient_t * gradient,
@ -47,49 +46,9 @@ _pixman_init_gradient (gradient_t * gradient,
gradient->n_stops = n_stops;
gradient->stop_range = 0xffff;
gradient->common.class = SOURCE_IMAGE_CLASS_UNKNOWN;
return TRUE;
}
/*
* By default, just evaluate the image at 32bpp and expand. Individual image
* types can plug in a better scanline getter if they want to. For example
* we could produce smoother gradients by evaluating them at higher color
* depth, but that's a project for the future.
*/
void
_pixman_image_get_scanline_generic_64 (pixman_image_t * image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t * mask)
{
uint32_t *mask8 = NULL;
/* Contract the mask image, if one exists, so that the 32-bit fetch
* function can use it.
*/
if (mask)
{
mask8 = pixman_malloc_ab (width, sizeof(uint32_t));
if (!mask8)
return;
pixman_contract (mask8, (uint64_t *)mask, width);
}
/* Fetch the source image into the first half of buffer. */
_pixman_image_get_scanline_32 (image, x, y, width, (uint32_t*)buffer, mask8);
/* Expand from 32bpp to 64bpp in place. */
pixman_expand ((uint64_t *)buffer, buffer, PIXMAN_a8r8g8b8, width);
free (mask8);
}
pixman_image_t *
_pixman_image_allocate (void)
{
@ -112,7 +71,7 @@ _pixman_image_allocate (void)
common->alpha_map = NULL;
common->component_alpha = FALSE;
common->ref_count = 1;
common->classify = NULL;
common->property_changed = NULL;
common->client_clip = FALSE;
common->destroy_func = NULL;
common->destroy_data = NULL;
@ -122,44 +81,6 @@ _pixman_image_allocate (void)
return image;
}
source_image_class_t
_pixman_image_classify (pixman_image_t *image,
int x,
int y,
int width,
int height)
{
if (image->common.classify)
return image->common.classify (image, x, y, width, height);
else
return SOURCE_IMAGE_CLASS_UNKNOWN;
}
void
_pixman_image_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
{
image->common.get_scanline_32 (image, x, y, width, buffer, mask);
}
/* Even though the type of buffer is uint32_t *, the function actually expects
* a uint64_t *buffer.
*/
void
_pixman_image_get_scanline_64 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *unused)
{
image->common.get_scanline_64 (image, x, y, width, buffer, unused);
}
static void
image_property_changed (pixman_image_t *image)
{
@ -239,54 +160,27 @@ _pixman_image_reset_clip_region (pixman_image_t *image)
image->common.have_clip_region = FALSE;
}
static pixman_bool_t out_of_bounds_workaround = TRUE;
/* Old X servers rely on out-of-bounds accesses when they are asked
* to composite with a window as the source. They create a pixman image
* pointing to some bogus position in memory, but then they set a clip
* region to the position where the actual bits are.
/* Executive Summary: This function is a no-op that only exists
* for historical reasons.
*
* There used to be a bug in the X server where it would rely on
* out-of-bounds accesses when it was asked to composite with a
* window as the source. It would create a pixman image pointing
* to some bogus position in memory, but then set a clip region
* to the position where the actual bits were.
*
* Due to a bug in old versions of pixman, where it would not clip
* against the image bounds when a clip region was set, this would
* actually work. So by default we allow certain out-of-bound access
* to happen unless explicitly disabled.
* actually work. So when the pixman bug was fixed, a workaround was
* added to allow certain out-of-bound accesses. This function disabled
* those workarounds.
*
* Fixed X servers should call this function to disable the workaround.
* Since 0.21.2, pixman doesn't do these workarounds anymore, so now
* this function is a no-op.
*/
PIXMAN_EXPORT void
pixman_disable_out_of_bounds_workaround (void)
{
out_of_bounds_workaround = FALSE;
}
static pixman_bool_t
source_image_needs_out_of_bounds_workaround (bits_image_t *image)
{
if (image->common.clip_sources &&
image->common.repeat == PIXMAN_REPEAT_NONE &&
image->common.have_clip_region &&
out_of_bounds_workaround)
{
if (!image->common.client_clip)
{
/* There is no client clip, so if the clip region extends beyond the
* drawable geometry, it must be because the X server generated the
* bogus clip region.
*/
const pixman_box32_t *extents =
pixman_region32_extents (&image->common.clip_region);
if (extents->x1 >= 0 && extents->x2 <= image->width &&
extents->y1 >= 0 && extents->y2 <= image->height)
{
return FALSE;
}
}
return TRUE;
}
return FALSE;
}
static void
@ -316,8 +210,25 @@ compute_image_info (pixman_image_t *image)
if (image->common.transform->matrix[0][1] == 0 &&
image->common.transform->matrix[1][0] == 0)
{
if (image->common.transform->matrix[0][0] == -pixman_fixed_1 &&
image->common.transform->matrix[1][1] == -pixman_fixed_1)
{
flags |= FAST_PATH_ROTATE_180_TRANSFORM;
}
flags |= FAST_PATH_SCALE_TRANSFORM;
}
else if (image->common.transform->matrix[0][0] == 0 &&
image->common.transform->matrix[1][1] == 0)
{
pixman_fixed_t m01 = image->common.transform->matrix[0][1];
if (m01 == -image->common.transform->matrix[1][0])
{
if (m01 == -pixman_fixed_1)
flags |= FAST_PATH_ROTATE_90_TRANSFORM;
else if (m01 == pixman_fixed_1)
flags |= FAST_PATH_ROTATE_270_TRANSFORM;
}
}
}
if (image->common.transform->matrix[0][0] > 0)
@ -409,6 +320,12 @@ compute_image_info (pixman_image_t *image)
else
{
code = image->bits.format;
if (!image->common.transform &&
image->common.repeat == PIXMAN_REPEAT_NORMAL)
{
flags |= FAST_PATH_SIMPLE_REPEAT | FAST_PATH_SAMPLES_COVER_CLIP;
}
}
if (!PIXMAN_FORMAT_A (image->bits.format) &&
@ -421,9 +338,6 @@ compute_image_info (pixman_image_t *image)
flags |= FAST_PATH_IS_OPAQUE;
}
if (source_image_needs_out_of_bounds_workaround (&image->bits))
flags |= FAST_PATH_NEEDS_WORKAROUND;
if (image->bits.read_func || image->bits.write_func)
flags &= ~FAST_PATH_NO_ACCESSORS;
@ -431,10 +345,25 @@ compute_image_info (pixman_image_t *image)
flags &= ~FAST_PATH_NARROW_FORMAT;
break;
case LINEAR:
case RADIAL:
code = PIXMAN_unknown;
/*
* As explained in pixman-radial-gradient.c, every point of
* the plane has a valid associated radius (and thus will be
* colored) if and only if a is negative (i.e. one of the two
* circles contains the other one).
*/
if (image->radial.a >= 0)
break;
/* Fall through */
case CONICAL:
case LINEAR:
code = PIXMAN_unknown;
if (image->common.repeat != PIXMAN_REPEAT_NONE)
{
int i;
@ -496,7 +425,8 @@ _pixman_image_validate (pixman_image_t *image)
* property_changed() can make use of the flags
* to set up accessors etc.
*/
image->common.property_changed (image);
if (image->common.property_changed)
image->common.property_changed (image);
image->common.dirty = FALSE;
}
@ -577,7 +507,7 @@ pixman_image_set_transform (pixman_image_t * image,
if (common->transform == transform)
return TRUE;
if (memcmp (&id, transform, sizeof (pixman_transform_t)) == 0)
if (!transform || memcmp (&id, transform, sizeof (pixman_transform_t)) == 0)
{
free (common->transform);
common->transform = NULL;
@ -586,6 +516,12 @@ pixman_image_set_transform (pixman_image_t * image,
goto out;
}
if (common->transform &&
memcmp (common->transform, transform, sizeof (pixman_transform_t)) == 0)
{
return TRUE;
}
if (common->transform == NULL)
common->transform = malloc (sizeof (pixman_transform_t));
@ -610,6 +546,9 @@ PIXMAN_EXPORT void
pixman_image_set_repeat (pixman_image_t *image,
pixman_repeat_t repeat)
{
if (image->common.repeat == repeat)
return;
image->common.repeat = repeat;
image_property_changed (image);
@ -654,6 +593,9 @@ PIXMAN_EXPORT void
pixman_image_set_source_clipping (pixman_image_t *image,
pixman_bool_t clip_sources)
{
if (image->common.clip_sources == clip_sources)
return;
image->common.clip_sources = clip_sources;
image_property_changed (image);
@ -669,6 +611,9 @@ pixman_image_set_indexed (pixman_image_t * image,
{
bits_image_t *bits = (bits_image_t *)image;
if (bits->indexed == indexed)
return;
bits->indexed = indexed;
image_property_changed (image);
@ -731,6 +676,9 @@ PIXMAN_EXPORT void
pixman_image_set_component_alpha (pixman_image_t *image,
pixman_bool_t component_alpha)
{
if (image->common.component_alpha == component_alpha)
return;
image->common.component_alpha = component_alpha;
image_property_changed (image);
@ -813,12 +761,18 @@ pixman_image_get_format (pixman_image_t *image)
}
uint32_t
_pixman_image_get_solid (pixman_image_t * image,
pixman_format_code_t format)
_pixman_image_get_solid (pixman_implementation_t *imp,
pixman_image_t * image,
pixman_format_code_t format)
{
uint32_t result;
pixman_iter_t iter;
_pixman_image_get_scanline_32 (image, 0, 0, 1, &result, NULL);
_pixman_implementation_src_iter_init (
imp, &iter, image, 0, 0, 1, 1,
(uint8_t *)&result, ITER_NARROW);
result = *iter.get_scanline (&iter, NULL);
/* If necessary, convert RGB <--> BGR. */
if (PIXMAN_FORMAT_TYPE (format) != PIXMAN_TYPE_ARGB)


@ -111,6 +111,20 @@ delegate_fill (pixman_implementation_t *imp,
imp->delegate, bits, stride, bpp, x, y, width, height, xor);
}
static void
delegate_src_iter_init (pixman_implementation_t *imp,
pixman_iter_t * iter)
{
imp->delegate->src_iter_init (imp->delegate, iter);
}
static void
delegate_dest_iter_init (pixman_implementation_t *imp,
pixman_iter_t * iter)
{
imp->delegate->dest_iter_init (imp->delegate, iter);
}
pixman_implementation_t *
_pixman_implementation_create (pixman_implementation_t *delegate,
const pixman_fast_path_t *fast_paths)
@ -133,6 +147,8 @@ _pixman_implementation_create (pixman_implementation_t *delegate,
*/
imp->blt = delegate_blt;
imp->fill = delegate_fill;
imp->src_iter_init = delegate_src_iter_init;
imp->dest_iter_init = delegate_dest_iter_init;
for (i = 0; i < PIXMAN_N_OPERATORS; ++i)
{
@ -143,7 +159,7 @@ _pixman_implementation_create (pixman_implementation_t *delegate,
}
imp->fast_paths = fast_paths;
return imp;
}
@ -225,3 +241,64 @@ _pixman_implementation_fill (pixman_implementation_t *imp,
return (*imp->fill) (imp, bits, stride, bpp, x, y, width, height, xor);
}
static uint32_t *
get_scanline_null (pixman_iter_t *iter, const uint32_t *mask)
{
return NULL;
}
void
_pixman_implementation_src_iter_init (pixman_implementation_t *imp,
pixman_iter_t *iter,
pixman_image_t *image,
int x,
int y,
int width,
int height,
uint8_t *buffer,
iter_flags_t flags)
{
iter->image = image;
iter->buffer = (uint32_t *)buffer;
iter->x = x;
iter->y = y;
iter->width = width;
iter->height = height;
iter->flags = flags;
if (!image)
{
iter->get_scanline = get_scanline_null;
}
else if ((flags & (ITER_IGNORE_ALPHA | ITER_IGNORE_RGB)) ==
(ITER_IGNORE_ALPHA | ITER_IGNORE_RGB))
{
iter->get_scanline = _pixman_iter_get_scanline_noop;
}
else
{
(*imp->src_iter_init) (imp, iter);
}
}
void
_pixman_implementation_dest_iter_init (pixman_implementation_t *imp,
pixman_iter_t *iter,
pixman_image_t *image,
int x,
int y,
int width,
int height,
uint8_t *buffer,
iter_flags_t flags)
{
iter->image = image;
iter->buffer = (uint32_t *)buffer;
iter->x = x;
iter->y = y;
iter->width = width;
iter->height = height;
iter->flags = flags;
(*imp->dest_iter_init) (imp, iter);
}


@ -1,3 +1,4 @@
/* -*- Mode: c; c-basic-offset: 4; tab-width: 8; indent-tabs-mode: t; -*- */
/*
* Copyright © 2000 SuSE, Inc.
* Copyright © 2007 Red Hat, Inc.
@ -30,99 +31,96 @@
#include <stdlib.h>
#include "pixman-private.h"
static source_image_class_t
linear_gradient_classify (pixman_image_t *image,
int x,
int y,
int width,
int height)
static pixman_bool_t
linear_gradient_is_horizontal (pixman_image_t *image,
int x,
int y,
int width,
int height)
{
linear_gradient_t *linear = (linear_gradient_t *)image;
pixman_vector_t v;
pixman_fixed_32_32_t l;
pixman_fixed_48_16_t dx, dy, a, b, off;
pixman_fixed_48_16_t factors[4];
int i;
pixman_fixed_48_16_t dx, dy;
double inc;
image->source.class = SOURCE_IMAGE_CLASS_UNKNOWN;
if (image->common.transform)
{
/* projective transformation */
if (image->common.transform->matrix[2][0] != 0 ||
image->common.transform->matrix[2][1] != 0 ||
image->common.transform->matrix[2][2] == 0)
{
return FALSE;
}
v.vector[0] = image->common.transform->matrix[0][1];
v.vector[1] = image->common.transform->matrix[1][1];
v.vector[2] = image->common.transform->matrix[2][2];
}
else
{
v.vector[0] = 0;
v.vector[1] = pixman_fixed_1;
v.vector[2] = pixman_fixed_1;
}
dx = linear->p2.x - linear->p1.x;
dy = linear->p2.y - linear->p1.y;
l = dx * dx + dy * dy;
if (l)
{
a = (dx << 32) / l;
b = (dy << 32) / l;
}
else
{
a = b = 0;
}
if (l == 0)
return FALSE;
off = (-a * linear->p1.x
-b * linear->p1.y) >> 16;
/*
* compute how much the input of the gradient walked changes
* when moving vertically through the whole image
*/
inc = height * (double) pixman_fixed_1 * pixman_fixed_1 *
(dx * v.vector[0] + dy * v.vector[1]) /
(v.vector[2] * (double) l);
for (i = 0; i < 3; i++)
{
v.vector[0] = pixman_int_to_fixed ((i % 2) * (width - 1) + x);
v.vector[1] = pixman_int_to_fixed ((i / 2) * (height - 1) + y);
v.vector[2] = pixman_fixed_1;
/* check that casting to integer would result in 0 */
if (-1 < inc && inc < 1)
return TRUE;
if (image->common.transform)
{
if (!pixman_transform_point_3d (image->common.transform, &v))
{
image->source.class = SOURCE_IMAGE_CLASS_UNKNOWN;
return image->source.class;
}
}
factors[i] = ((a * v.vector[0] + b * v.vector[1]) >> 16) + off;
}
if (factors[2] == factors[0])
image->source.class = SOURCE_IMAGE_CLASS_HORIZONTAL;
else if (factors[1] == factors[0])
image->source.class = SOURCE_IMAGE_CLASS_VERTICAL;
return image->source.class;
return FALSE;
}
static void
linear_gradient_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
static uint32_t *
linear_get_scanline_narrow (pixman_iter_t *iter,
const uint32_t *mask)
{
pixman_image_t *image = iter->image;
int x = iter->x;
int y = iter->y;
int width = iter->width;
uint32_t * buffer = iter->buffer;
pixman_vector_t v, unit;
pixman_fixed_32_32_t l;
pixman_fixed_48_16_t dx, dy, a, b, off;
pixman_fixed_48_16_t dx, dy;
gradient_t *gradient = (gradient_t *)image;
source_image_t *source = (source_image_t *)image;
linear_gradient_t *linear = (linear_gradient_t *)image;
uint32_t *end = buffer + width;
pixman_gradient_walker_t walker;
_pixman_gradient_walker_init (&walker, gradient, source->common.repeat);
_pixman_gradient_walker_init (&walker, gradient, image->common.repeat);
/* reference point is the center of the pixel */
v.vector[0] = pixman_int_to_fixed (x) + pixman_fixed_1 / 2;
v.vector[1] = pixman_int_to_fixed (y) + pixman_fixed_1 / 2;
v.vector[2] = pixman_fixed_1;
if (source->common.transform)
if (image->common.transform)
{
if (!pixman_transform_point_3d (source->common.transform, &v))
return;
unit.vector[0] = source->common.transform->matrix[0][0];
unit.vector[1] = source->common.transform->matrix[1][0];
unit.vector[2] = source->common.transform->matrix[2][0];
if (!pixman_transform_point_3d (image->common.transform, &v))
return iter->buffer;
unit.vector[0] = image->common.transform->matrix[0][0];
unit.vector[1] = image->common.transform->matrix[1][0];
unit.vector[2] = image->common.transform->matrix[2][0];
}
else
{
@ -136,31 +134,31 @@ linear_gradient_get_scanline_32 (pixman_image_t *image,
l = dx * dx + dy * dy;
if (l != 0)
if (l == 0 || unit.vector[2] == 0)
{
a = (dx << 32) / l;
b = (dy << 32) / l;
off = (-a * linear->p1.x
-b * linear->p1.y) >> 16;
}
if (l == 0 || (unit.vector[2] == 0 && v.vector[2] == pixman_fixed_1))
{
pixman_fixed_48_16_t inc, t;
/* affine transformation only */
if (l == 0)
pixman_fixed_32_32_t t, next_inc;
double inc;
if (l == 0 || v.vector[2] == 0)
{
t = 0;
inc = 0;
}
else
{
t = ((a * v.vector[0] + b * v.vector[1]) >> 16) + off;
inc = (a * unit.vector[0] + b * unit.vector[1]) >> 16;
}
double invden, v2;
if (source->class == SOURCE_IMAGE_CLASS_VERTICAL)
invden = pixman_fixed_1 * (double) pixman_fixed_1 /
(l * (double) v.vector[2]);
v2 = v.vector[2] * (1. / pixman_fixed_1);
t = ((dx * v.vector[0] + dy * v.vector[1]) -
(dx * linear->p1.x + dy * linear->p1.y) * v2) * invden;
inc = (dx * unit.vector[0] + dy * unit.vector[1]) * invden;
}
next_inc = 0;
if (((pixman_fixed_32_32_t )(inc * width)) == 0)
{
register uint32_t color;
@ -170,90 +168,90 @@ linear_gradient_get_scanline_32 (pixman_image_t *image,
}
else
{
if (!mask)
{
while (buffer < end)
{
*buffer++ = _pixman_gradient_walker_pixel (&walker, t);
t += inc;
}
}
else
{
while (buffer < end)
{
if (*mask++)
*buffer = _pixman_gradient_walker_pixel (&walker, t);
int i;
buffer++;
t += inc;
i = 0;
while (buffer < end)
{
if (!mask || *mask++)
{
*buffer = _pixman_gradient_walker_pixel (&walker,
t + next_inc);
}
i++;
next_inc = inc * i;
buffer++;
}
}
}
else
{
/* projective transformation */
pixman_fixed_48_16_t t;
double t;
if (source->class == SOURCE_IMAGE_CLASS_VERTICAL)
t = 0;
while (buffer < end)
{
register uint32_t color;
if (v.vector[2] == 0)
if (!mask || *mask++)
{
t = 0;
}
else
{
pixman_fixed_48_16_t x, y;
x = ((pixman_fixed_48_16_t) v.vector[0] << 16) / v.vector[2];
y = ((pixman_fixed_48_16_t) v.vector[1] << 16) / v.vector[2];
t = ((a * x + b * y) >> 16) + off;
}
color = _pixman_gradient_walker_pixel (&walker, t);
while (buffer < end)
*buffer++ = color;
}
else
{
while (buffer < end)
{
if (!mask || *mask++)
if (v.vector[2] != 0)
{
if (v.vector[2] == 0)
{
t = 0;
}
else
{
pixman_fixed_48_16_t x, y;
x = ((pixman_fixed_48_16_t)v.vector[0] << 16) / v.vector[2];
y = ((pixman_fixed_48_16_t)v.vector[1] << 16) / v.vector[2];
t = ((a * x + b * y) >> 16) + off;
}
double invden, v2;
*buffer = _pixman_gradient_walker_pixel (&walker, t);
invden = pixman_fixed_1 * (double) pixman_fixed_1 /
(l * (double) v.vector[2]);
v2 = v.vector[2] * (1. / pixman_fixed_1);
t = ((dx * v.vector[0] + dy * v.vector[1]) -
(dx * linear->p1.x + dy * linear->p1.y) * v2) * invden;
}
++buffer;
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
*buffer = _pixman_gradient_walker_pixel (&walker, t);
}
++buffer;
v.vector[0] += unit.vector[0];
v.vector[1] += unit.vector[1];
v.vector[2] += unit.vector[2];
}
}
iter->y++;
return iter->buffer;
}
static void
linear_gradient_property_changed (pixman_image_t *image)
static uint32_t *
linear_get_scanline_wide (pixman_iter_t *iter, const uint32_t *mask)
{
image->common.get_scanline_32 = linear_gradient_get_scanline_32;
image->common.get_scanline_64 = _pixman_image_get_scanline_generic_64;
uint32_t *buffer = linear_get_scanline_narrow (iter, NULL);
pixman_expand ((uint64_t *)buffer, buffer, PIXMAN_a8r8g8b8, iter->width);
return buffer;
}
void
_pixman_linear_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
if (linear_gradient_is_horizontal (
iter->image, iter->x, iter->y, iter->width, iter->height))
{
if (iter->flags & ITER_NARROW)
linear_get_scanline_narrow (iter, NULL);
else
linear_get_scanline_wide (iter, NULL);
iter->get_scanline = _pixman_iter_get_scanline_noop;
}
else
{
if (iter->flags & ITER_NARROW)
iter->get_scanline = linear_get_scanline_narrow;
else
iter->get_scanline = linear_get_scanline_wide;
}
}
PIXMAN_EXPORT pixman_image_t *
@ -282,9 +280,6 @@ pixman_image_create_linear_gradient (pixman_point_fixed_t * p1,
linear->p2 = *p2;
image->type = LINEAR;
image->source.class = SOURCE_IMAGE_CLASS_UNKNOWN;
image->common.classify = linear_gradient_classify;
image->common.property_changed = linear_gradient_property_changed;
return image;
}


@ -425,7 +425,8 @@ pixman_transform_is_inverse (const struct pixman_transform *a,
{
struct pixman_transform t;
pixman_transform_multiply (&t, a, b);
if (!pixman_transform_multiply (&t, a, b))
return FALSE;
return pixman_transform_is_identity (&t);
}
@ -464,9 +465,6 @@ pixman_transform_from_pixman_f_transform (struct pixman_transform * t,
return TRUE;
}
static const int a[3] = { 3, 3, 2 };
static const int b[3] = { 2, 1, 1 };
PIXMAN_EXPORT pixman_bool_t
pixman_f_transform_invert (struct pixman_f_transform * dst,
const struct pixman_f_transform *src)


@ -1108,7 +1108,7 @@ mmx_composite_over_n_8888 (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
if (src == 0)
return;
@ -1187,7 +1187,7 @@ mmx_composite_over_n_0565 (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
if (src == 0)
return;
@ -1275,7 +1275,7 @@ mmx_composite_over_n_8888_8888_ca (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@ -1384,7 +1384,7 @@ mmx_composite_over_8888_n_8888 (pixman_implementation_t *imp,
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, uint32_t, dst_stride, dst_line, 1);
PIXMAN_IMAGE_GET_LINE (src_image, src_x, src_y, uint32_t, src_stride, src_line, 1);
mask = _pixman_image_get_solid (mask_image, dst_image->bits.format);
mask = _pixman_image_get_solid (imp, mask_image, dst_image->bits.format);
mask &= 0xff000000;
mask = mask | mask >> 8 | mask >> 16 | mask >> 24;
vmask = load8888 (mask);
@ -1469,7 +1469,7 @@ mmx_composite_over_x888_n_8888 (pixman_implementation_t *imp,
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, uint32_t, dst_stride, dst_line, 1);
PIXMAN_IMAGE_GET_LINE (src_image, src_x, src_y, uint32_t, src_stride, src_line, 1);
mask = _pixman_image_get_solid (mask_image, dst_image->bits.format);
mask = _pixman_image_get_solid (imp, mask_image, dst_image->bits.format);
mask &= 0xff000000;
mask = mask | mask >> 8 | mask >> 16 | mask >> 24;
@ -1764,7 +1764,7 @@ mmx_composite_over_n_8_8888 (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@ -1921,8 +1921,8 @@ pixman_fill_mmx (uint32_t *bits,
"movq %7, %4\n"
"movq %7, %5\n"
"movq %7, %6\n"
: "=y" (v1), "=y" (v2), "=y" (v3),
"=y" (v4), "=y" (v5), "=y" (v6), "=y" (v7)
: "=&y" (v1), "=&y" (v2), "=&y" (v3),
"=&y" (v4), "=&y" (v5), "=&y" (v6), "=y" (v7)
: "y" (vfill));
#endif
@ -2038,7 +2038,7 @@ mmx_composite_src_n_8_8888 (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@ -2173,7 +2173,7 @@ mmx_composite_over_n_8_0565 (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@ -2532,7 +2532,7 @@ mmx_composite_over_n_8888_0565_ca (pixman_implementation_t *imp,
CHECKPOINT ();
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
srca = src >> 24;
if (src == 0)
@ -2643,7 +2643,7 @@ mmx_composite_in_n_8_8 (pixman_implementation_t *imp,
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, uint8_t, dst_stride, dst_line, 1);
PIXMAN_IMAGE_GET_LINE (mask_image, mask_x, mask_y, uint8_t, mask_stride, mask_line, 1);
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
sa = src >> 24;
@ -2790,7 +2790,7 @@ mmx_composite_add_n_8_8 (pixman_implementation_t *imp,
PIXMAN_IMAGE_GET_LINE (dst_image, dest_x, dest_y, uint8_t, dst_stride, dst_line, 1);
PIXMAN_IMAGE_GET_LINE (mask_image, mask_x, mask_y, uint8_t, mask_stride, mask_line, 1);
src = _pixman_image_get_solid (src_image, dst_image->bits.format);
src = _pixman_image_get_solid (imp, src_image, dst_image->bits.format);
sa = src >> 24;
@ -3340,10 +3340,9 @@ mmx_fill (pixman_implementation_t *imp,
}
pixman_implementation_t *
_pixman_implementation_create_mmx (void)
_pixman_implementation_create_mmx (pixman_implementation_t *fallback)
{
pixman_implementation_t *general = _pixman_implementation_create_fast_path ();
pixman_implementation_t *imp = _pixman_implementation_create (general, mmx_fast_paths);
pixman_implementation_t *imp = _pixman_implementation_create (fallback, mmx_fast_paths);
imp->combine_32[PIXMAN_OP_OVER] = mmx_combine_over_u;
imp->combine_32[PIXMAN_OP_OVER_REVERSE] = mmx_combine_over_reverse_u;


@ -20,7 +20,6 @@
* Images
*/
typedef struct image_common image_common_t;
typedef struct source_image source_image_t;
typedef struct solid_fill solid_fill_t;
typedef struct gradient gradient_t;
typedef struct linear_gradient linear_gradient_t;
@ -61,18 +60,6 @@ typedef enum
SOLID
} image_type_t;
typedef enum
{
SOURCE_IMAGE_CLASS_UNKNOWN,
SOURCE_IMAGE_CLASS_HORIZONTAL,
SOURCE_IMAGE_CLASS_VERTICAL,
} source_image_class_t;
typedef source_image_class_t (*classify_func_t) (pixman_image_t *image,
int x,
int y,
int width,
int height);
typedef void (*property_changed_func_t) (pixman_image_t *image);
struct image_common
@ -97,10 +84,7 @@ struct image_common
int alpha_origin_x;
int alpha_origin_y;
pixman_bool_t component_alpha;
classify_func_t classify;
property_changed_func_t property_changed;
fetch_scanline_t get_scanline_32;
fetch_scanline_t get_scanline_64;
pixman_image_destroy_func_t destroy_func;
void * destroy_data;
@ -109,15 +93,9 @@ struct image_common
pixman_format_code_t extended_format_code;
};
struct source_image
{
image_common_t common;
source_image_class_t class;
};
struct solid_fill
{
source_image_t common;
image_common_t common;
pixman_color_t color;
uint32_t color_32;
@ -126,10 +104,9 @@ struct solid_fill
struct gradient
{
source_image_t common;
image_common_t common;
int n_stops;
pixman_gradient_stop_t *stops;
int stop_range;
};
struct linear_gradient
@ -177,6 +154,9 @@ struct bits_image
uint32_t * free_me;
int rowstride; /* in number of uint32_t's */
fetch_scanline_t get_scanline_32;
fetch_scanline_t get_scanline_64;
fetch_scanline_t fetch_scanline_32;
fetch_pixel_32_t fetch_pixel_32;
store_scanline_t store_scanline_32;
@ -195,7 +175,6 @@ union pixman_image
image_type_t type;
image_common_t common;
bits_image_t bits;
source_image_t source;
gradient_t gradient;
linear_gradient_t linear;
conical_gradient_t conical;
@ -203,59 +182,73 @@ union pixman_image
solid_fill_t solid;
};
typedef struct pixman_iter_t pixman_iter_t;
typedef uint32_t *(* pixman_iter_get_scanline_t) (pixman_iter_t *iter, const uint32_t *mask);
typedef void (* pixman_iter_write_back_t) (pixman_iter_t *iter);
typedef enum
{
ITER_NARROW = (1 << 0),
/* "Localized alpha" is when the alpha channel is used only to compute
* the alpha value of the destination. This means that the computation
* of the RGB values of the result is independent of the alpha value.
*
* For example, the OVER operator has localized alpha for the
* destination, because the RGB values of the result can be computed
* without knowing the destination alpha. Similarly, ADD has localized
* alpha for both source and destination because the RGB values of the
* result can be computed without knowing the alpha value of source or
* destination.
*
* When the destination is xRGB, this is useful knowledge, because then
* we can treat it as if it were ARGB, which means in some cases we can
* avoid copying it to a temporary buffer.
*/
ITER_LOCALIZED_ALPHA = (1 << 1),
ITER_IGNORE_ALPHA = (1 << 2),
ITER_IGNORE_RGB = (1 << 3)
} iter_flags_t;
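The "localized alpha" property described in the comment can be checked on a single premultiplied channel: OVER computes result = src + (1 - src_a) * dst, so the RGB output reads the destination's RGB but never its alpha. A small sketch (hypothetical helper, 8-bit premultiplied values, not pixman's actual combiner):

```c
#include <assert.h>
#include <stdint.h>

/* One premultiplied channel of the OVER operator:
 *   result = src + (1 - src_a) * dst
 * Note the formula never reads dst's alpha channel, which is why
 * an xRGB destination can be treated as if it were ARGB. */
uint8_t
over_channel (uint8_t src, uint8_t src_a, uint8_t dst)
{
    return (uint8_t) (src + ((255 - src_a) * dst + 127) / 255);
}
```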
struct pixman_iter_t
{
/* These are initialized by _pixman_implementation_{src,dest}_init */
pixman_image_t * image;
uint32_t * buffer;
int x, y;
int width;
int height;
iter_flags_t flags;
/* These function pointers are initialized by the implementation */
pixman_iter_get_scanline_t get_scanline;
pixman_iter_write_back_t write_back;
/* These fields are scratch data that implementations can use */
uint8_t * bits;
int stride;
};
void
_pixman_bits_image_setup_accessors (bits_image_t *image);
void
_pixman_image_get_scanline_generic_64 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask);
source_image_class_t
_pixman_image_classify (pixman_image_t *image,
int x,
int y,
int width,
int height);
_pixman_bits_image_src_iter_init (pixman_image_t *image, pixman_iter_t *iter);
void
_pixman_image_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask);
/* Even though the type of buffer is uint32_t *, the function actually expects
* a uint64_t *buffer.
*/
void
_pixman_image_get_scanline_64 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *unused);
_pixman_bits_image_dest_iter_init (pixman_image_t *image, pixman_iter_t *iter);
void
_pixman_image_store_scanline_32 (bits_image_t * image,
int x,
int y,
int width,
const uint32_t *buffer);
_pixman_solid_fill_iter_init (pixman_image_t *image, pixman_iter_t *iter);
/* Even though the type of buffer is uint32_t *, the function
* actually expects a uint64_t *buffer.
*/
void
_pixman_image_store_scanline_64 (bits_image_t * image,
int x,
int y,
int width,
const uint32_t *buffer);
_pixman_linear_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter);
void
_pixman_radial_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter);
void
_pixman_conical_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter);
pixman_image_t *
_pixman_image_allocate (void);
@ -270,10 +263,6 @@ _pixman_image_reset_clip_region (pixman_image_t *image);
void
_pixman_image_validate (pixman_image_t *image);
uint32_t
_pixman_image_get_solid (pixman_image_t * image,
pixman_format_code_t format);
#define PIXMAN_IMAGE_GET_LINE(image, x, y, type, out_stride, line, mul) \
do \
{ \
@ -406,6 +395,8 @@ typedef pixman_bool_t (*pixman_fill_func_t) (pixman_implementation_t *imp,
int width,
int height,
uint32_t xor);
typedef void (*pixman_iter_init_func_t) (pixman_implementation_t *imp,
pixman_iter_t *iter);
void _pixman_setup_combiner_functions_32 (pixman_implementation_t *imp);
void _pixman_setup_combiner_functions_64 (pixman_implementation_t *imp);
@ -427,9 +418,11 @@ struct pixman_implementation_t
pixman_implementation_t * toplevel;
pixman_implementation_t * delegate;
const pixman_fast_path_t * fast_paths;
pixman_blt_func_t blt;
pixman_fill_func_t fill;
pixman_iter_init_func_t src_iter_init;
pixman_iter_init_func_t dest_iter_init;
pixman_combine_32_func_t combine_32[PIXMAN_N_OPERATORS];
pixman_combine_32_func_t combine_32_ca[PIXMAN_N_OPERATORS];
@ -437,6 +430,11 @@ struct pixman_implementation_t
pixman_combine_64_func_t combine_64_ca[PIXMAN_N_OPERATORS];
};
uint32_t
_pixman_image_get_solid (pixman_implementation_t *imp,
pixman_image_t * image,
pixman_format_code_t format);
pixman_implementation_t *
_pixman_implementation_create (pixman_implementation_t *delegate,
const pixman_fast_path_t *fast_paths);
@ -496,36 +494,58 @@ _pixman_implementation_fill (pixman_implementation_t *imp,
int height,
uint32_t xor);
void
_pixman_implementation_src_iter_init (pixman_implementation_t *imp,
pixman_iter_t *iter,
pixman_image_t *image,
int x,
int y,
int width,
int height,
uint8_t *buffer,
iter_flags_t flags);
void
_pixman_implementation_dest_iter_init (pixman_implementation_t *imp,
pixman_iter_t *iter,
pixman_image_t *image,
int x,
int y,
int width,
int height,
uint8_t *buffer,
iter_flags_t flags);
/* Specific implementations */
pixman_implementation_t *
_pixman_implementation_create_general (void);
pixman_implementation_t *
_pixman_implementation_create_fast_path (void);
_pixman_implementation_create_fast_path (pixman_implementation_t *fallback);
#ifdef USE_MMX
pixman_implementation_t *
_pixman_implementation_create_mmx (void);
_pixman_implementation_create_mmx (pixman_implementation_t *fallback);
#endif
#ifdef USE_SSE2
pixman_implementation_t *
_pixman_implementation_create_sse2 (void);
_pixman_implementation_create_sse2 (pixman_implementation_t *fallback);
#endif
#ifdef USE_ARM_SIMD
pixman_implementation_t *
_pixman_implementation_create_arm_simd (void);
_pixman_implementation_create_arm_simd (pixman_implementation_t *fallback);
#endif
#ifdef USE_ARM_NEON
pixman_implementation_t *
_pixman_implementation_create_arm_neon (void);
_pixman_implementation_create_arm_neon (pixman_implementation_t *fallback);
#endif
#ifdef USE_VMX
pixman_implementation_t *
_pixman_implementation_create_vmx (void);
_pixman_implementation_create_vmx (pixman_implementation_t *fallback);
#endif
pixman_implementation_t *
@ -536,6 +556,8 @@ _pixman_choose_implementation (void);
/*
* Utilities
*/
uint32_t *
_pixman_iter_get_scanline_noop (pixman_iter_t *iter, const uint32_t *mask);
/* These "formats" all have depth 0, so they
* will never clash with any real ones
@ -563,14 +585,17 @@ _pixman_choose_implementation (void);
#define FAST_PATH_NEAREST_FILTER (1 << 11)
#define FAST_PATH_HAS_TRANSFORM (1 << 12)
#define FAST_PATH_IS_OPAQUE (1 << 13)
#define FAST_PATH_NEEDS_WORKAROUND (1 << 14)
#define FAST_PATH_NO_NORMAL_REPEAT (1 << 14)
#define FAST_PATH_NO_NONE_REPEAT (1 << 15)
#define FAST_PATH_SAMPLES_COVER_CLIP (1 << 16)
#define FAST_PATH_X_UNIT_POSITIVE (1 << 17)
#define FAST_PATH_AFFINE_TRANSFORM (1 << 18)
#define FAST_PATH_Y_UNIT_ZERO (1 << 19)
#define FAST_PATH_BILINEAR_FILTER (1 << 20)
#define FAST_PATH_NO_NORMAL_REPEAT (1 << 21)
#define FAST_PATH_ROTATE_90_TRANSFORM (1 << 21)
#define FAST_PATH_ROTATE_180_TRANSFORM (1 << 22)
#define FAST_PATH_ROTATE_270_TRANSFORM (1 << 23)
#define FAST_PATH_SIMPLE_REPEAT (1 << 24)
#define FAST_PATH_PAD_REPEAT \
(FAST_PATH_NO_NONE_REPEAT | \


@ -96,8 +96,24 @@ radial_compute_color (double a,
if (a == 0)
{
return _pixman_gradient_walker_pixel (walker,
pixman_fixed_1 / 2 * c / b);
double t;
if (b == 0)
return 0;
t = pixman_fixed_1 / 2 * c / b;
if (repeat == PIXMAN_REPEAT_NONE)
{
if (0 <= t && t <= pixman_fixed_1)
return _pixman_gradient_walker_pixel (walker, t);
}
else
{
if (t * dr > mindr)
return _pixman_gradient_walker_pixel (walker, t);
}
return 0;
}
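The new branch handles the degenerate case of the radial equation: when the quadratic coefficient a is zero the equation collapses to a linear one with solution t = c / (2b), and when b is also zero there is no solution at all, so nothing is painted. A floating-point sketch of that case analysis (hypothetical helper; it assumes the a·t² − 2·b·t + c = 0 sign convention that the fixed-point code above appears to use):

```c
#include <assert.h>

/* Degenerate case of a*t^2 - 2*b*t + c = 0 when a == 0:
 * returns 1 and stores t = c / (2*b) on success, 0 when there
 * is no solution (b == 0) or the case does not apply. */
int
solve_degenerate_radial (double a, double b, double c, double *t)
{
    if (a != 0.0)
        return 0;   /* not the degenerate case; use the quadratic path */
    if (b == 0.0)
        return 0;   /* no solution: paint nothing */
    *t = c / (2.0 * b);
    return 1;
}
```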
det = fdot (b, a, 0, b, -c, 0);
@ -128,13 +144,8 @@ radial_compute_color (double a,
return 0;
}
static void
radial_gradient_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
static uint32_t *
radial_get_scanline_narrow (pixman_iter_t *iter, const uint32_t *mask)
{
/*
* Implementation of radial gradients following the PDF specification.
@ -217,9 +228,13 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
* <=> for every p, the radiuses associated with the two t solutions
* have opposite sign
*/
pixman_image_t *image = iter->image;
int x = iter->x;
int y = iter->y;
int width = iter->width;
uint32_t *buffer = iter->buffer;
gradient_t *gradient = (gradient_t *)image;
source_image_t *source = (source_image_t *)image;
radial_gradient_t *radial = (radial_gradient_t *)image;
uint32_t *end = buffer + width;
pixman_gradient_walker_t walker;
@ -230,16 +245,16 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
v.vector[1] = pixman_int_to_fixed (y) + pixman_fixed_1 / 2;
v.vector[2] = pixman_fixed_1;
_pixman_gradient_walker_init (&walker, gradient, source->common.repeat);
_pixman_gradient_walker_init (&walker, gradient, image->common.repeat);
if (source->common.transform)
if (image->common.transform)
{
if (!pixman_transform_point_3d (source->common.transform, &v))
return;
if (!pixman_transform_point_3d (image->common.transform, &v))
return iter->buffer;
unit.vector[0] = source->common.transform->matrix[0][0];
unit.vector[1] = source->common.transform->matrix[1][0];
unit.vector[2] = source->common.transform->matrix[2][0];
unit.vector[0] = image->common.transform->matrix[0][0];
unit.vector[1] = image->common.transform->matrix[1][0];
unit.vector[2] = image->common.transform->matrix[2][0];
}
else
{
@ -290,10 +305,11 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
db = dot (unit.vector[0], unit.vector[1], 0,
radial->delta.x, radial->delta.y, 0);
c = dot (v.vector[0], v.vector[1], -radial->c1.radius,
c = dot (v.vector[0], v.vector[1],
-((pixman_fixed_48_16_t) radial->c1.radius),
v.vector[0], v.vector[1], radial->c1.radius);
dc = dot (2 * v.vector[0] + unit.vector[0],
2 * v.vector[1] + unit.vector[1],
dc = dot (2 * (pixman_fixed_48_16_t) v.vector[0] + unit.vector[0],
2 * (pixman_fixed_48_16_t) v.vector[1] + unit.vector[1],
0,
unit.vector[0], unit.vector[1], 0);
ddc = 2 * dot (unit.vector[0], unit.vector[1], 0,
@ -308,7 +324,7 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
radial->delta.radius,
radial->mindr,
&walker,
source->common.repeat);
image->common.repeat);
}
b += db;
@ -353,14 +369,14 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
radial->delta.radius,
radial->mindr,
&walker,
source->common.repeat);
image->common.repeat);
}
else
{
*buffer = 0;
}
}
++buffer;
v.vector[0] += unit.vector[0];
@ -368,13 +384,28 @@ radial_gradient_get_scanline_32 (pixman_image_t *image,
v.vector[2] += unit.vector[2];
}
}
iter->y++;
return iter->buffer;
}
static void
radial_gradient_property_changed (pixman_image_t *image)
static uint32_t *
radial_get_scanline_wide (pixman_iter_t *iter, const uint32_t *mask)
{
image->common.get_scanline_32 = radial_gradient_get_scanline_32;
image->common.get_scanline_64 = _pixman_image_get_scanline_generic_64;
uint32_t *buffer = radial_get_scanline_narrow (iter, NULL);
pixman_expand ((uint64_t *)buffer, buffer, PIXMAN_a8r8g8b8, iter->width);
return buffer;
}
void
_pixman_radial_gradient_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
if (iter->flags & ITER_NARROW)
iter->get_scanline = radial_get_scanline_narrow;
else
iter->get_scanline = radial_get_scanline_wide;
}
PIXMAN_EXPORT pixman_image_t *
@ -424,8 +455,6 @@ pixman_image_create_radial_gradient (pixman_point_fixed_t * inner,
radial->mindr = -1. * pixman_fixed_1 * radial->c1.radius;
image->common.property_changed = radial_gradient_property_changed;
return image;
}


@ -2545,8 +2545,7 @@ bitmap_addrect (region_type_t *reg,
((r-1)->y1 == ry1) && ((r-1)->y2 == ry2) &&
((r-1)->x1 <= rx1) && ((r-1)->x2 >= rx2))))
{
if (!reg->data ||
reg->data->numRects == reg->data->size)
if (reg->data->numRects == reg->data->size)
{
if (!pixman_rect_alloc (reg, 1))
return NULL;
@ -2590,6 +2589,8 @@ PREFIX (_init_from_image) (region_type_t *region,
PREFIX(_init) (region);
critical_if_fail (region->data);
return_if_fail (image->type == BITS);
return_if_fail (image->bits.format == PIXMAN_a1);


@ -26,54 +26,29 @@
#endif
#include "pixman-private.h"
static void
solid_fill_get_scanline_32 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
void
_pixman_solid_fill_iter_init (pixman_image_t *image, pixman_iter_t *iter)
{
uint32_t *end = buffer + width;
uint32_t color = image->solid.color_32;
if (iter->flags & ITER_NARROW)
{
uint32_t *b = (uint32_t *)iter->buffer;
uint32_t *e = b + iter->width;
uint32_t color = iter->image->solid.color_32;
while (buffer < end)
*(buffer++) = color;
while (b < e)
*(b++) = color;
}
else
{
uint64_t *b = (uint64_t *)iter->buffer;
uint64_t *e = b + iter->width;
uint64_t color = image->solid.color_64;
return;
}
while (b < e)
*(b++) = color;
}
static void
solid_fill_get_scanline_64 (pixman_image_t *image,
int x,
int y,
int width,
uint32_t * buffer,
const uint32_t *mask)
{
uint64_t *b = (uint64_t *)buffer;
uint64_t *e = b + width;
uint64_t color = image->solid.color_64;
while (b < e)
*(b++) = color;
}
static source_image_class_t
solid_fill_classify (pixman_image_t *image,
int x,
int y,
int width,
int height)
{
return (image->source.class = SOURCE_IMAGE_CLASS_HORIZONTAL);
}
static void
solid_fill_property_changed (pixman_image_t *image)
{
image->common.get_scanline_32 = solid_fill_get_scanline_32;
image->common.get_scanline_64 = solid_fill_get_scanline_64;
iter->get_scanline = _pixman_iter_get_scanline_noop;
}
static uint32_t
@ -109,10 +84,6 @@ pixman_image_create_solid_fill (pixman_color_t *color)
img->solid.color_32 = color_to_uint32 (color);
img->solid.color_64 = color_to_uint64 (color);
img->source.class = SOURCE_IMAGE_CLASS_UNKNOWN;
img->common.classify = solid_fill_classify;
img->common.property_changed = solid_fill_property_changed;
return img;
}

(Diff for this file not shown because of its large size.)


@ -1,4 +1,5 @@
/*
* Copyright © 2002 Keith Packard, member of The XFree86 Project, Inc.
* Copyright © 2004 Keith Packard
*
* Permission to use, copy, modify, distribute, and sell this software and its
@ -25,6 +26,7 @@
#endif
#include <stdio.h>
#include <stdlib.h>
#include "pixman-private.h"
/*
@ -235,7 +237,6 @@ pixman_add_traps (pixman_image_t * image,
pixman_trap_t * traps)
{
int bpp;
int width;
int height;
pixman_fixed_t x_off_fixed;
@ -245,7 +246,6 @@ pixman_add_traps (pixman_image_t * image,
_pixman_image_validate (image);
width = image->bits.width;
height = image->bits.height;
bpp = PIXMAN_FORMAT_BPP (image->bits.format);
@ -349,10 +349,8 @@ pixman_rasterize_trapezoid (pixman_image_t * image,
int y_off)
{
int bpp;
int width;
int height;
pixman_fixed_t x_off_fixed;
pixman_fixed_t y_off_fixed;
pixman_edge_t l, r;
pixman_fixed_t t, b;
@ -364,11 +362,9 @@ pixman_rasterize_trapezoid (pixman_image_t * image,
if (!pixman_trapezoid_valid (trap))
return;
width = image->bits.width;
height = image->bits.height;
bpp = PIXMAN_FORMAT_BPP (image->bits.format);
x_off_fixed = pixman_int_to_fixed (x_off);
y_off_fixed = pixman_int_to_fixed (y_off);
t = trap->top + y_off_fixed;
@ -390,3 +386,272 @@ pixman_rasterize_trapezoid (pixman_image_t * image,
pixman_rasterize_edges (image, &l, &r, t, b);
}
}
PIXMAN_EXPORT void
pixman_composite_trapezoids (pixman_op_t op,
pixman_image_t * src,
pixman_image_t * dst,
pixman_format_code_t mask_format,
int x_src,
int y_src,
int x_dst,
int y_dst,
int n_traps,
const pixman_trapezoid_t * traps)
{
int i;
if (n_traps <= 0)
return;
_pixman_image_validate (src);
_pixman_image_validate (dst);
if (op == PIXMAN_OP_ADD &&
(src->common.flags & FAST_PATH_IS_OPAQUE) &&
(mask_format == dst->common.extended_format_code) &&
!(dst->common.have_clip_region))
{
for (i = 0; i < n_traps; ++i)
{
const pixman_trapezoid_t *trap = &(traps[i]);
if (!pixman_trapezoid_valid (trap))
continue;
pixman_rasterize_trapezoid (dst, trap, 0, 0);
}
}
else
{
pixman_image_t *tmp;
pixman_box32_t box;
int x_rel, y_rel;
box.x1 = INT32_MAX;
box.y1 = INT32_MAX;
box.x2 = INT32_MIN;
box.y2 = INT32_MIN;
for (i = 0; i < n_traps; ++i)
{
const pixman_trapezoid_t *trap = &(traps[i]);
int y1, y2;
if (!pixman_trapezoid_valid (trap))
continue;
y1 = pixman_fixed_to_int (trap->top);
if (y1 < box.y1)
box.y1 = y1;
y2 = pixman_fixed_to_int (pixman_fixed_ceil (trap->bottom));
if (y2 > box.y2)
box.y2 = y2;
#define EXTEND_MIN(x) \
if (pixman_fixed_to_int ((x)) < box.x1) \
box.x1 = pixman_fixed_to_int ((x));
#define EXTEND_MAX(x) \
if (pixman_fixed_to_int (pixman_fixed_ceil ((x))) > box.x2) \
box.x2 = pixman_fixed_to_int (pixman_fixed_ceil ((x)));
#define EXTEND(x) \
EXTEND_MIN(x); \
EXTEND_MAX(x);
EXTEND(trap->left.p1.x);
EXTEND(trap->left.p2.x);
EXTEND(trap->right.p1.x);
EXTEND(trap->right.p2.x);
}
if (box.x1 >= box.x2 || box.y1 >= box.y2)
return;
tmp = pixman_image_create_bits (
mask_format, box.x2 - box.x1, box.y2 - box.y1, NULL, -1);
for (i = 0; i < n_traps; ++i)
{
const pixman_trapezoid_t *trap = &(traps[i]);
if (!pixman_trapezoid_valid (trap))
continue;
pixman_rasterize_trapezoid (tmp, trap, - box.x1, - box.y1);
}
x_rel = box.x1 + x_src - x_dst;
y_rel = box.y1 + y_src - y_dst;
pixman_image_composite (op, src, tmp, dst,
x_rel, y_rel, 0, 0, box.x1, box.y1,
box.x2 - box.x1, box.y2 - box.y1);
pixman_image_unref (tmp);
}
}
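The EXTEND_MIN/EXTEND_MAX macros in pixman_composite_trapezoids round minima down and maxima up, so that pixels only partially covered by a trapezoid still land inside the temporary mask. A sketch of the same rounding rule in 16.16 fixed point (hypothetical names; pixman's real helpers are pixman_fixed_to_int and pixman_fixed_ceil):

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t fixed_t;                 /* 16.16 fixed point */
#define FIXED_1         (1 << 16)
#define FIXED_TO_INT(f) ((int) ((f) >> 16))
#define FIXED_CEIL(f)   (((f) + FIXED_1 - 1) & ~(FIXED_1 - 1))

/* Grow [*x1, *x2) so it covers the fixed-point coordinate x:
 * the minimum is floored, the maximum is ceiled. */
void
extend (fixed_t x, int *x1, int *x2)
{
    int lo = FIXED_TO_INT (x);
    int hi = FIXED_TO_INT (FIXED_CEIL (x));

    if (lo < *x1) *x1 = lo;
    if (hi > *x2) *x2 = hi;
}
```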
static int
greater_y (const pixman_point_fixed_t *a, const pixman_point_fixed_t *b)
{
if (a->y == b->y)
return a->x > b->x;
return a->y > b->y;
}
/*
* Note that the definition of this function is a bit odd because
* of the X coordinate space (y increasing downwards).
*/
static int
clockwise (const pixman_point_fixed_t *ref,
const pixman_point_fixed_t *a,
const pixman_point_fixed_t *b)
{
pixman_point_fixed_t ad, bd;
ad.x = a->x - ref->x;
ad.y = a->y - ref->y;
bd.x = b->x - ref->x;
bd.y = b->y - ref->y;
return ((pixman_fixed_32_32_t) bd.y * ad.x -
(pixman_fixed_32_32_t) ad.y * bd.x) < 0;
}
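The clockwise() test above is the standard 2-D cross-product orientation check; the comment notes it reads "backwards" because X's y axis grows downward. The same test on plain integers (hypothetical names, same semantics as the function above):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { int32_t x, y; } pt_t;

/* Sign of the cross product of (a - ref) and (b - ref), widened to
 * 64 bits to avoid overflow, exactly as clockwise() does with
 * pixman_fixed_32_32_t. */
int
winds_clockwise (pt_t ref, pt_t a, pt_t b)
{
    int64_t adx = a.x - ref.x, ady = a.y - ref.y;
    int64_t bdx = b.x - ref.x, bdy = b.y - ref.y;

    return (bdy * adx - ady * bdx) < 0;
}
```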
static void
triangle_to_trapezoids (const pixman_triangle_t *tri, pixman_trapezoid_t *traps)
{
const pixman_point_fixed_t *top, *left, *right, *tmp;
top = &tri->p1;
left = &tri->p2;
right = &tri->p3;
if (greater_y (top, left))
{
tmp = left;
left = top;
top = tmp;
}
if (greater_y (top, right))
{
tmp = right;
right = top;
top = tmp;
}
if (clockwise (top, right, left))
{
tmp = right;
right = left;
left = tmp;
}
/*
* Two cases:
*
* + +
* / \ / \
* / \ / \
* / + + \
* / -- -- \
* / -- -- \
* / --- --- \
* +-- --+
*/
traps->top = top->y;
traps->left.p1 = *top;
traps->left.p2 = *left;
traps->right.p1 = *top;
traps->right.p2 = *right;
if (right->y < left->y)
traps->bottom = right->y;
else
traps->bottom = left->y;
traps++;
*traps = *(traps - 1);
if (right->y < left->y)
{
traps->top = right->y;
traps->bottom = left->y;
traps->right.p1 = *right;
traps->right.p2 = *left;
}
else
{
traps->top = left->y;
traps->bottom = right->y;
traps->left.p1 = *left;
traps->left.p2 = *right;
}
}
static pixman_trapezoid_t *
convert_triangles (int n_tris, const pixman_triangle_t *tris)
{
pixman_trapezoid_t *traps;
int i;
if (n_tris <= 0)
return NULL;
traps = pixman_malloc_ab (n_tris, 2 * sizeof (pixman_trapezoid_t));
if (!traps)
return NULL;
for (i = 0; i < n_tris; ++i)
triangle_to_trapezoids (&(tris[i]), traps + 2 * i);
return traps;
}
PIXMAN_EXPORT void
pixman_composite_triangles (pixman_op_t op,
pixman_image_t * src,
pixman_image_t * dst,
pixman_format_code_t mask_format,
int x_src,
int y_src,
int x_dst,
int y_dst,
int n_tris,
const pixman_triangle_t * tris)
{
pixman_trapezoid_t *traps;
if ((traps = convert_triangles (n_tris, tris)))
{
pixman_composite_trapezoids (op, src, dst, mask_format,
x_src, y_src, x_dst, y_dst,
n_tris * 2, traps);
free (traps);
}
}
PIXMAN_EXPORT void
pixman_add_triangles (pixman_image_t *image,
int32_t x_off,
int32_t y_off,
int n_tris,
const pixman_triangle_t *tris)
{
pixman_trapezoid_t *traps;
if ((traps = convert_triangles (n_tris, tris)))
{
pixman_add_trapezoids (image, x_off, y_off,
n_tris * 2, traps);
free (traps);
}
}


@ -167,6 +167,12 @@ pixman_contract (uint32_t * dst,
}
}
uint32_t *
_pixman_iter_get_scanline_noop (pixman_iter_t *iter, const uint32_t *mask)
{
return iter->buffer;
}
#define N_TMP_BOXES (16)
pixman_bool_t


@ -33,9 +33,9 @@
#define PIXMAN_VERSION_MAJOR 0
#define PIXMAN_VERSION_MINOR 19
#define PIXMAN_VERSION_MICRO 1
#define PIXMAN_VERSION_MICRO 5
#define PIXMAN_VERSION_STRING "0.19.1"
#define PIXMAN_VERSION_STRING "0.19.5"
#define PIXMAN_VERSION_ENCODE(major, minor, micro) ( \
((major) * 10000) \


@ -1613,10 +1613,9 @@ static const pixman_fast_path_t vmx_fast_paths[] =
};
pixman_implementation_t *
_pixman_implementation_create_vmx (void)
_pixman_implementation_create_vmx (pixman_implementation_t *fallback)
{
pixman_implementation_t *fast = _pixman_implementation_create_fast_path ();
pixman_implementation_t *imp = _pixman_implementation_create (fast, vmx_fast_paths);
pixman_implementation_t *imp = _pixman_implementation_create (fallback, vmx_fast_paths);
/* Set up function pointers */


@ -30,14 +30,23 @@
#include <stdlib.h>
static pixman_implementation_t *global_implementation;
#ifdef TOOLCHAIN_SUPPORTS_ATTRIBUTE_CONSTRUCTOR
static void __attribute__((constructor))
pixman_constructor (void)
{
global_implementation = _pixman_choose_implementation ();
}
#endif
static force_inline pixman_implementation_t *
get_implementation (void)
{
static pixman_implementation_t *global_implementation;
#ifndef TOOLCHAIN_SUPPORTS_ATTRIBUTE_CONSTRUCTOR
if (!global_implementation)
global_implementation = _pixman_choose_implementation ();
#endif
return global_implementation;
}
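The pattern above ("Do CPU features detection from 'constructor' function when compiled with gcc") resolves the implementation once: eagerly from a GCC constructor where the toolchain supports it, otherwise lazily on first use. A stand-alone sketch of the same pattern with a string standing in for the implementation pointer (names are illustrative, not pixman's):

```c
#include <assert.h>
#include <stddef.h>

static const char *global_impl;

static const char *
choose_impl (void)
{
    return "c-fallback";   /* real code would probe CPU features here */
}

#if defined(__GNUC__)
/* Runs before main(), so the hot path below never branches. */
static void __attribute__((constructor))
impl_constructor (void)
{
    global_impl = choose_impl ();
}
#endif

const char *
get_impl (void)
{
    if (!global_impl)      /* lazy path for other toolchains */
        global_impl = choose_impl ();
    return global_impl;
}
```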
@ -153,57 +162,6 @@ optimize_operator (pixman_op_t op,
return operator_table[op].opaque_info[is_dest_opaque | is_source_opaque];
}
static void
apply_workaround (pixman_image_t *image,
int32_t * x,
int32_t * y,
uint32_t ** save_bits,
int * save_dx,
int * save_dy)
{
if (image && (image->common.flags & FAST_PATH_NEEDS_WORKAROUND))
{
/* Some X servers generate images that point to the
* wrong place in memory, but then set the clip region
* to point to the right place. Because of an old bug
* in pixman, this would actually work.
*
* Here we try and undo the damage
*/
int bpp = PIXMAN_FORMAT_BPP (image->bits.format) / 8;
pixman_box32_t *extents;
uint8_t *t;
int dx, dy;
extents = pixman_region32_extents (&(image->common.clip_region));
dx = extents->x1;
dy = extents->y1;
*save_bits = image->bits.bits;
*x -= dx;
*y -= dy;
pixman_region32_translate (&(image->common.clip_region), -dx, -dy);
t = (uint8_t *)image->bits.bits;
t += dy * image->bits.rowstride * 4 + dx * bpp;
image->bits.bits = (uint32_t *)t;
*save_dx = dx;
*save_dy = dy;
}
}
static void
unapply_workaround (pixman_image_t *image, uint32_t *bits, int dx, int dy)
{
if (image && (image->common.flags & FAST_PATH_NEEDS_WORKAROUND))
{
image->bits.bits = bits;
pixman_region32_translate (&image->common.clip_region, dx, dy);
}
}
/*
* Computing composite region
*/
@ -377,6 +335,126 @@ pixman_compute_composite_region32 (pixman_region32_t * region,
return TRUE;
}
static void
walk_region_internal (pixman_implementation_t *imp,
pixman_op_t op,
pixman_image_t * src_image,
pixman_image_t * mask_image,
pixman_image_t * dst_image,
int32_t src_x,
int32_t src_y,
int32_t mask_x,
int32_t mask_y,
int32_t dest_x,
int32_t dest_y,
int32_t width,
int32_t height,
pixman_bool_t src_repeat,
pixman_bool_t mask_repeat,
pixman_region32_t * region,
pixman_composite_func_t composite_rect)
{
int w, h, w_this, h_this;
int x_msk, y_msk, x_src, y_src, x_dst, y_dst;
int src_dy = src_y - dest_y;
int src_dx = src_x - dest_x;
int mask_dy = mask_y - dest_y;
int mask_dx = mask_x - dest_x;
const pixman_box32_t *pbox;
int n;
pbox = pixman_region32_rectangles (region, &n);
/* Fast path for non-repeating sources */
if (!src_repeat && !mask_repeat)
{
while (n--)
{
(*composite_rect) (imp, op,
src_image, mask_image, dst_image,
pbox->x1 + src_dx,
pbox->y1 + src_dy,
pbox->x1 + mask_dx,
pbox->y1 + mask_dy,
pbox->x1,
pbox->y1,
pbox->x2 - pbox->x1,
pbox->y2 - pbox->y1);
pbox++;
}
return;
}
while (n--)
{
h = pbox->y2 - pbox->y1;
y_src = pbox->y1 + src_dy;
y_msk = pbox->y1 + mask_dy;
y_dst = pbox->y1;
while (h)
{
h_this = h;
w = pbox->x2 - pbox->x1;
x_src = pbox->x1 + src_dx;
x_msk = pbox->x1 + mask_dx;
x_dst = pbox->x1;
if (mask_repeat)
{
y_msk = MOD (y_msk, mask_image->bits.height);
if (h_this > mask_image->bits.height - y_msk)
h_this = mask_image->bits.height - y_msk;
}
if (src_repeat)
{
y_src = MOD (y_src, src_image->bits.height);
if (h_this > src_image->bits.height - y_src)
h_this = src_image->bits.height - y_src;
}
while (w)
{
w_this = w;
if (mask_repeat)
{
x_msk = MOD (x_msk, mask_image->bits.width);
if (w_this > mask_image->bits.width - x_msk)
w_this = mask_image->bits.width - x_msk;
}
if (src_repeat)
{
x_src = MOD (x_src, src_image->bits.width);
if (w_this > src_image->bits.width - x_src)
w_this = src_image->bits.width - x_src;
}
(*composite_rect) (imp, op,
src_image, mask_image, dst_image,
x_src, y_src, x_msk, y_msk, x_dst, y_dst,
w_this, h_this);
w -= w_this;
x_src += w_this;
x_msk += w_this;
x_dst += w_this;
}
h -= h_this;
y_src += h_this;
y_msk += h_this;
y_dst += h_this;
}
pbox++;
}
}
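walk_region_internal tiles a repeating source by folding every coordinate back into the image with MOD and then clamping each chunk so it never crosses a tile edge. The key detail is that MOD must be a true mathematical modulus (result always in [0, b)), unlike C's `%`, which keeps the dividend's sign. A sketch of both pieces under that assumption (hypothetical helpers, not pixman's exact macros):

```c
#include <assert.h>

/* True modulus: result is always in [0, b), even for negative a. */
int
tile_mod (int a, int b)
{
    int r = a % b;
    return r < 0 ? r + b : r;
}

/* Width of the first chunk of span [x, x+w) that stays inside one
 * tile of width tile_w -- the same clamping walk_region_internal
 * applies per axis before calling the composite function. */
int
first_chunk_width (int x, int w, int tile_w)
{
    int avail = tile_w - tile_mod (x, tile_w);
    return w < avail ? w : avail;
}
```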
#define N_CACHED_FAST_PATHS 8
typedef struct
@ -732,13 +810,6 @@ pixman_image_composite32 (pixman_op_t op,
uint32_t src_flags, mask_flags, dest_flags;
pixman_region32_t region;
pixman_box32_t *extents;
uint32_t *src_bits;
int src_dx, src_dy;
uint32_t *mask_bits;
int mask_dx, mask_dy;
uint32_t *dest_bits;
int dest_dx, dest_dy;
pixman_bool_t need_workaround;
pixman_implementation_t *imp;
pixman_composite_func_t func;
@ -776,16 +847,6 @@ pixman_image_composite32 (pixman_op_t op,
src_format = mask_format = PIXMAN_rpixbuf;
}
/* Check for workaround */
need_workaround = (src_flags | mask_flags | dest_flags) & FAST_PATH_NEEDS_WORKAROUND;
if (need_workaround)
{
apply_workaround (src, &src_x, &src_y, &src_bits, &src_dx, &src_dy);
apply_workaround (mask, &mask_x, &mask_y, &mask_bits, &mask_dx, &mask_dy);
apply_workaround (dest, &dest_x, &dest_y, &dest_bits, &dest_dx, &dest_dy);
}
pixman_region32_init (&region);
if (!pixman_compute_composite_region32 (
@ -829,36 +890,17 @@ pixman_image_composite32 (pixman_op_t op,
dest_format, dest_flags,
&imp, &func))
{
const pixman_box32_t *pbox;
int n;
pbox = pixman_region32_rectangles (&region, &n);
while (n--)
{
func (imp, op,
src, mask, dest,
pbox->x1 + src_x - dest_x,
pbox->y1 + src_y - dest_y,
pbox->x1 + mask_x - dest_x,
pbox->y1 + mask_y - dest_y,
pbox->x1,
pbox->y1,
pbox->x2 - pbox->x1,
pbox->y2 - pbox->y1);
pbox++;
}
walk_region_internal (imp, op,
src, mask, dest,
src_x, src_y, mask_x, mask_y,
dest_x, dest_y,
width, height,
(src_flags & FAST_PATH_SIMPLE_REPEAT),
(mask_flags & FAST_PATH_SIMPLE_REPEAT),
&region, func);
}
out:
if (need_workaround)
{
unapply_workaround (src, src_bits, src_dx, src_dy);
unapply_workaround (mask, mask_bits, mask_dx, mask_dy);
unapply_workaround (dest, dest_bits, dest_dx, dest_dy);
}
pixman_region32_fini (&region);
}
@@ -939,9 +981,12 @@ color_to_pixel (pixman_color_t * color,
format == PIXMAN_x8b8g8r8 ||
format == PIXMAN_b8g8r8a8 ||
format == PIXMAN_b8g8r8x8 ||
format == PIXMAN_r8g8b8a8 ||
format == PIXMAN_r8g8b8x8 ||
format == PIXMAN_r5g6b5 ||
format == PIXMAN_b5g6r5 ||
format == PIXMAN_a8))
format == PIXMAN_a8 ||
format == PIXMAN_a1))
{
return FALSE;
}
@@ -960,8 +1005,12 @@ color_to_pixel (pixman_color_t * color,
((c & 0x0000ff00) << 8) |
((c & 0x000000ff) << 24);
}
if (PIXMAN_FORMAT_TYPE (format) == PIXMAN_TYPE_RGBA)
c = ((c & 0xff000000) >> 24) | (c << 8);
if (format == PIXMAN_a8)
if (format == PIXMAN_a1)
c = c >> 31;
else if (format == PIXMAN_a8)
c = c >> 24;
else if (format == PIXMAN_r5g6b5 ||
format == PIXMAN_b5g6r5)
@@ -1168,6 +1217,8 @@ pixman_format_supported_source (pixman_format_code_t format)
case PIXMAN_x8b8g8r8:
case PIXMAN_b8g8r8a8:
case PIXMAN_b8g8r8x8:
case PIXMAN_r8g8b8a8:
case PIXMAN_r8g8b8x8:
case PIXMAN_r8g8b8:
case PIXMAN_b8g8r8:
case PIXMAN_r5g6b5:
@@ -652,11 +652,13 @@ struct pixman_indexed
#define PIXMAN_TYPE_YUY2 6
#define PIXMAN_TYPE_YV12 7
#define PIXMAN_TYPE_BGRA 8
#define PIXMAN_TYPE_RGBA 9
#define PIXMAN_FORMAT_COLOR(f) \
(PIXMAN_FORMAT_TYPE(f) == PIXMAN_TYPE_ARGB || \
PIXMAN_FORMAT_TYPE(f) == PIXMAN_TYPE_ABGR || \
PIXMAN_FORMAT_TYPE(f) == PIXMAN_TYPE_BGRA)
PIXMAN_FORMAT_TYPE(f) == PIXMAN_TYPE_BGRA || \
PIXMAN_FORMAT_TYPE(f) == PIXMAN_TYPE_RGBA)
/* 32bpp formats */
typedef enum {
@@ -666,6 +668,8 @@ typedef enum {
PIXMAN_x8b8g8r8 = PIXMAN_FORMAT(32,PIXMAN_TYPE_ABGR,0,8,8,8),
PIXMAN_b8g8r8a8 = PIXMAN_FORMAT(32,PIXMAN_TYPE_BGRA,8,8,8,8),
PIXMAN_b8g8r8x8 = PIXMAN_FORMAT(32,PIXMAN_TYPE_BGRA,0,8,8,8),
PIXMAN_r8g8b8a8 = PIXMAN_FORMAT(32,PIXMAN_TYPE_RGBA,8,8,8,8),
PIXMAN_r8g8b8x8 = PIXMAN_FORMAT(32,PIXMAN_TYPE_RGBA,0,8,8,8),
PIXMAN_x14r6g6b6 = PIXMAN_FORMAT(32,PIXMAN_TYPE_ARGB,0,6,6,6),
PIXMAN_x2r10g10b10 = PIXMAN_FORMAT(32,PIXMAN_TYPE_ARGB,0,10,10,10),
PIXMAN_a2r10g10b10 = PIXMAN_FORMAT(32,PIXMAN_TYPE_ARGB,2,10,10,10),
@@ -843,19 +847,25 @@ void pixman_image_composite32 (pixman_op_t op,
int32_t width,
int32_t height);
/* Old X servers rely on out-of-bounds accesses when they are asked
* to composite with a window as the source. They create a pixman image
* pointing to some bogus position in memory, but then they set a clip
* region to the position where the actual bits are.
/* Executive Summary: This function is a no-op that only exists
* for historical reasons.
*
* There used to be a bug in the X server where it would rely on
* out-of-bounds accesses when it was asked to composite with a
* window as the source. It would create a pixman image pointing
* to some bogus position in memory, but then set a clip region
* to the position where the actual bits were.
*
* Due to a bug in old versions of pixman, where it would not clip
* against the image bounds when a clip region was set, this would
* actually work. So by default we allow certain out-of-bound access
* to happen unless explicitly disabled.
* actually work. So when the pixman bug was fixed, a workaround was
* added to allow certain out-of-bound accesses. This function disabled
* those workarounds.
*
* Fixed X servers should call this function to disable the workaround.
* Since 0.21.2, pixman doesn't do these workarounds anymore, so now this
* function is a no-op.
*/
void pixman_disable_out_of_bounds_workaround (void);
/*
* Trapezoids
@@ -864,6 +874,7 @@ typedef struct pixman_edge pixman_edge_t;
typedef struct pixman_trapezoid pixman_trapezoid_t;
typedef struct pixman_trap pixman_trap_t;
typedef struct pixman_span_fix pixman_span_fix_t;
typedef struct pixman_triangle pixman_triangle_t;
/*
* An edge structure. This represents a single polygon edge
@@ -891,6 +902,10 @@ struct pixman_trapezoid
pixman_line_fixed_t left, right;
};
struct pixman_triangle
{
pixman_point_fixed_t p1, p2, p3;
};
/* whether 't' is a well defined not obviously empty trapezoid */
#define pixman_trapezoid_valid(t) \
@@ -946,6 +961,31 @@ void pixman_rasterize_trapezoid (pixman_image_t *image,
const pixman_trapezoid_t *trap,
int x_off,
int y_off);
void pixman_composite_trapezoids (pixman_op_t op,
pixman_image_t * src,
pixman_image_t * dst,
pixman_format_code_t mask_format,
int x_src,
int y_src,
int x_dst,
int y_dst,
int n_traps,
const pixman_trapezoid_t * traps);
void pixman_composite_triangles (pixman_op_t op,
pixman_image_t * src,
pixman_image_t * dst,
pixman_format_code_t mask_format,
int x_src,
int y_src,
int x_dst,
int y_dst,
int n_tris,
const pixman_triangle_t * tris);
void pixman_add_triangles (pixman_image_t *image,
int32_t x_off,
int32_t y_off,
int n_tris,
const pixman_triangle_t *tris);
PIXMAN_END_DECLS
@@ -62,9 +62,9 @@ fails-if(d2d) == repeating-linear-1b.html repeating-linear-1-ref.html
== repeating-radial-2a.html repeating-radial-2-ref.html
== twostops-1a.html twostops-1-ref.html
== twostops-1b.html twostops-1-ref.html
fails-if(!d2d) == twostops-1c.html twostops-1-ref.html # bug 524173
fails-if(cocoaWidget) == twostops-1c.html twostops-1-ref.html # bug 524173
== twostops-1d.html twostops-1-ref.html
fails-if(!cocoaWidget&&!d2d) == twostops-1e.html twostops-1-ref.html # bug 524173
== twostops-1e.html twostops-1-ref.html
# from http://www.xanthir.com/:4bhipd by way of http://a-ja.net/newgrad.html
fails-if(Android) == aja-linear-1a.html aja-linear-1-ref.html