CompilerGLSL type_to_glsl() and image_type_glsl() functions support an optional object ID.
Add SPIRType::Image::access member to support SPIR-V OpTypeImage access qualifier.
Remove SPIRType::Image::is_read and ::is_written members.
Use DecorationNonReadable and DecorationNonWritable to mark read/write access for image variables.
CompilerMSL emits access qualifiers per image variable, instead of per image type.
CompilerGLSL and CompilerHLSL behaviour is unchanged.
Add bool members is_read and is_written to SPIRType::Image.
Output correct texture read/write access by marking whether textures
are read from and written to by the shader.
Override bitcast_glsl_op() to use Metal as_type<type> functions.
Add implementations of SPIR-V functions inverse(), degrees() & radians().
Map inverseSqrt() to rsqrt().
Map roundEven() to rint().
GLSL functions imageSize() and textureSize() map to equivalent
expressions using the MSL get_width() & get_height() functions.
Map several SPIR-V integer bitfield functions to MSL equivalents.
Map SPIR-V atomic functions to MSL equivalents.
Map texture packing and unpacking functions to MSL equivalents.
Refactor existing, and add new, image query functions.
Reorganize header lines into includes and pragmas.
Simplify type_to_glsl() logic.
Add MSL test case vert/functions.vert for added function implementations.
Add MSL test case comp/atomic.comp for added function implementations.
test_shaders.py uses macOS compilation for MSL shader compilation validation.
WebGL supports lod texture funcs only in fragment
shaders, but SPIR-V supports lod texture funcs only
in vertex shaders. This reverts calls which were
forced (inferred from the use of a 0 constant) to use
an lod back to plain calls in vertex shaders when
targeting legacy ES.
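For example (hypothetical names; uTex and vUV stand in for the real
sampler and coordinate), a reverted lookup changes from:
texture2DLod(uTex, vUV, 0.0)
to the plain call:
texture2D(uTex, vUV)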
spirv_msl optionally adds padding and packing to allow MSL
struct members to align with SPIR-V struct alignments.
spirv_cross adds convenience methods for testing Decorations.
spirv_glsl replaces the member_decl() function with the new emit_struct_member().
Allow struct member types to be marked as packed via DecorationCPacked decoration.
In some cases, the compiler decided to emit the continue block first,
which invalidated the expressions used by the condition.
Parameters to functions can be evaluated in any order, which caused
"random" behavior.
To extract a column from a row-major matrix, we need to do a strided load
one component at a time. In this case flattened_access_chain_offset still
returns the offset to the first element, but the stride is equal to the
matrix stride instead of the vector stride.
For this to work, we need to pass the matrix stride (and a transpose flag)
through, similar to how matrix flattening works.
Additionally, slightly clean up the recursive flattened_access_chain
structure - specifically, instead of deciding mid-traversal that we need
matrix stride information, we can just pass the matrix stride through - for
access chains that end in a matrix/vector this gets us what we need, and
for access chains that end in structs the flattened_access_chain_struct
code will recompute the correct stride/transposition data to pass on
further.
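As a hypothetical illustration, reading column 0 of a row-major mat4
that starts at vec4 array index 4 loads one component per vec4, one
matrix stride apart:
vec4 col = vec4(UBO[4].x, UBO[5].x, UBO[6].x, UBO[7].x);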
We currently only support access chains that end in a matrix by
propagating a "needs transpose" flag upstream, which flips the matrix
multiplication order.
It's possible to support indexed extraction as well; however, it would
have to generate code like this:
vec4 row = vec4(UBO[0].y, UBO[1].y, UBO[2].y, UBO[3].y);
for a column equivalent of:
vec4 row = UBO[1];
It is definitely possible to do so, but it requires signaling the vector
output that it needs to switch to per-component extraction, which is a bit
more trouble than it is worth for now.
Instead of filling a std::string buffer passed by reference, return a new
string. This may be slightly slower in certain cases, but those are pretty
rare, and this matches the code style better.
Also streamline error handling in the different branches and extract a
function to generate vector swizzles.
Legacy GLSL targets do not support uniform buffers, and as such require
some sort of emulation. There are two alternatives - one is to represent
a uniform buffer as a uniform struct, and the other is to flatten it into
an array of primitive vector types (vec4).
Uniform structs have two disadvantages that make using them prohibitive
in some applications:
- The location assignment for struct members is arbitrary, which means
the application has to set each struct member one by one
- Some Android drivers fail to link shader programs if both the vertex
and fragment shader use the same uniform struct
Because of this, we need to support flattening uniform buffers into an
array. This is not just important for legacy GLSL but is also sometimes
useful for ESSL 3.0, where some Android drivers do not have stable UBO
support.
The way flattening works is that the entire buffer is represented as a
vec4 array; each access chain is rewritten into a combination of array
accesses, swizzles and data type constructors. Specifically:
- Extracting a vector or a scalar requires indexing into the array with
an optional swizzle, for example CB0[13].yz for reading a vec2
- Extracting a matrix or a struct requires extracting each individual
vector or struct member and then combining them into the resulting
object
- Extracting arrays is not supported, mostly because the resulting
construct is very inefficient and ESSL 1.0 does not support array
constructors.
Additionally, while we try to constant-fold each individual indexing
operation, there are cases where we have to use dynamic index computation
(specifically when indexing arrays with non-constants); so the general
form of the primitive array extraction expression is:
buffer[stride0*index0+...+strideN*indexN+offset]
where the strides and offsets are integer literals and the indices
represent variables.
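As a hypothetical sketch (CB0 is the flattened vec4 array; all offsets
and names are invented for illustration):
vec2 v = CB0[13].yz;                 // vector extraction with a swizzle
mat2 m = mat2(CB0[4].xy, CB0[5].xy); // matrix rebuilt vector by vector
vec4 e = CB0[2 * index + 8];         // dynamic index computation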
- By default, emit uniform structs for UBOs, like push constants.
- Forward transpose information,
and optimize transpose(matrix) * vector to vector * matrix.
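For example, with hypothetical names uMVP and aPos, instead of emitting:
gl_Position = transpose(uMVP) * aPos;
we can emit the equivalent:
gl_Position = aPos * uMVP;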
CompilerGLSL add to_function_name() and to_function_args() functions to
organize the structure of the emit_texture_op() function.
CompilerMSL add support for MSL gather(), gather_compare() and sample_compare() functions.
CompilerGLSL builtin_to_glsl() function outputs gl_ClipDistance for the
BuiltInClipDistance builtin, and includes the builtin code in the output
when handling an unknown builtin code.
CompilerMSL uses the type ID instead of the type object where appropriate
to support array types, where type.self is not consistent with the actual
type ID, plus supports array stride calculation even when it is not
explicitly set by the SPIR-V code.
For output consistency, CompilerMSL prefers the use of standard builtin
names over specified names, and outputs builtins qualified with the output
struct while in the entry function.
Previously, we would generate parentheses proactively when generating
binary ops; however, this leads to uglier code and hits warnings in
compilers when the result is used as a conditional.
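For example, a comparison used as a condition previously came out as:
if ((a == b)) { ... }
and is now emitted as:
if (a == b) { ... }
avoiding warnings such as clang's -Wparentheses-equality.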
The size of an array can be a specialization constant or a spec constant
op. This complicates things quite a lot.
Reflection becomes very painful in the presence of expressions instead
of literals, so add a new array which expresses this.
It is unlikely that we will need to do accurate reflection of interface
types which have a specialization constant size.
SSBOs and UBOs will for now throw an exception if a dynamic size is used,
since it is very difficult to know the real size.
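A hypothetical sketch of what the output can look like for an interface
array sized by a specialization constant:
layout(constant_id = 10) const int SIZE = 2;
layout(location = 0) out float vOuts[SIZE];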
Fix some minor missing pieces from C++.
Type remapping like this doesn't seem to fit the MSL backend so well, as
it does a lot of remapping internally on its own.
Type name remapping really is for fringe extension cases in GLSL which
aren't yet supported in SPIR-V.
- Only consider I/O variables if they are part of the OpEntryPoint.
- Keep a safe fallback if the number of entry points is 1 to avoid
potentially breaking previously working shaders.
There was a potential problem if variables were invalidated and SPIR-V
read expressions which depended on other expressions which in turn
depended on the invalidated variable.
Also fixes an issue where variables were considered immutable if they were
forwardable. This allowed some incorrect optimizations to slip through.
This is now fixed in the ESSL 3.10 backend of glslang, so we can remove
the old workaround of dropping full memory barriers.
Also fixes an unrelated issue which newer glslang detects.
OpName is only for debug information, so we must be very careful that
we do not reuse the same name for different variables.
This was previously done for local variables, but this commit extends
this to global variables as well.
This is supported via compiler extensions, but we can be standards
compliant here, so use the classic workaround of having a single-entry
array at the end of the structs instead.
In some cases we need to bitcast when dealing with int vs. uint.
SPIR-V allows inputs to be of different integer signedness, so we need
to deal with this somehow.
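For example (a hypothetical snippet where a is an int, b is a uint and
the result is a uint), an IAdd comes out with an explicit cast on the
signed operand:
uint c = uint(a) + b;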
Add a testing system to test SPIR-V assembly.
For now, test all possible combinations for all major cases:
- IAdd (which doesn't care about input types as long as they're equal)
- SDiv/UDiv operations which care about input type.
- Arith/logical right shifts.
- IEqual to test outputs to bvec, which shouldn't get an output cast. Also
tests casting in function-like calls.
Reformats the entire codebase. Better to do it now than later.
Adds .clang-format and a convenience script format_all.sh which formats
everything automatically.
Add a virtual CompilerGLSL emit_sampled_image_op() function for OpSampledImage.
Under MSL, set the sampler ID for local OpSampledImage objects and extract it when emitting the sampler.