emit call results into potentially aliased slots. This allows us
to properly mark indirect return slots as noalias, at the cost
of requiring an extra memcpy when assigning an aggregate call
result into an l-value. It also brings us into compliance with
the x86-64 ABI.
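As a minimal sketch of the case in question (the struct and function
names are invented for illustration), consider assigning an aggregate
call result through a pointer that might alias:

    struct Big { int data[8]; };
    struct Big make_big(void);

    void assign(struct Big *dest) {
      /* Before: the call could be emitted directly into *dest, so the
         indirect-return pointer could not be marked noalias.  Now the
         result is emitted into a fresh temporary (safely noalias) and
         then memcpy'd into *dest. */
      *dest = make_big();
    }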
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@138599 91177308-0d34-0410-b5e6-96231b3b80d8
Note that because we don't usually touch the MMX registers anyway,
all -mno-mmx needs to do is tweak the x86-32 calling convention a
little for vectors that look like MMX vectors, and prevent the
definition of __MMX__.

clang doesn't actually stop the user from using MMX inline asm
operands or MMX builtins in -mno-mmx mode; as a QOI issue, it would
be nice to diagnose this, but I doubt it really matters much.
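As a rough illustration (the typedef and function below are invented),
this is the sort of 64-bit vector type affected:

    /* A 64-bit vector of two ints "looks like" an MMX vector; with
       -mno-mmx the tweaked x86-32 convention passes and returns it
       without touching the MMX registers. */
    typedef int v2si __attribute__((vector_size(8)));

    v2si passthrough(v2si v) { return v; }

    #ifdef __MMX__
    /* Not defined under -mno-mmx, so MMX-only code guarded like this
       is skipped. */
    #endif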
<rdar://problem/9694837>
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@134770 91177308-0d34-0410-b5e6-96231b3b80d8
code generator will do it. With this patch, clang compiles the example
in PR9794 without an alloca temporary.
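The PR9794 test case is not reproduced here; as a loose, hypothetical
illustration of the kind of code where the temporary can be avoided
(names invented), consider forwarding an aggregate call result
directly:

    struct S { int a, b, c, d; };
    struct S make_s(void);

    struct S forward(void) {
      /* The call result can be emitted straight into this function's
         own return slot instead of into a separate alloca that is then
         copied out. */
      return make_s();
    }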
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@131881 91177308-0d34-0410-b5e6-96231b3b80d8
with a non-default stack ABI alignment (of 16).
- This fixes ABI conformance, but breaks codegen since we now have
underaligned arguments. This is a marginal improvement overall,
though, and will be fixed in the next commit.
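A hedged sketch (the type name is invented) of an argument whose
required alignment exceeds the default x86-32 stack alignment:

    /* Requires 16-byte alignment, more than the default 4-byte stack
       alignment on x86-32.  When passed indirectly, the callee's copy
       must be realigned to 16 bytes; until then it is underaligned,
       which is the codegen breakage noted above. */
    typedef struct {
      double d[2];
    } __attribute__((aligned(16))) Aligned16;

    double sum(Aligned16 a) { return a.d[0] + a.d[1]; }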
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@114113 91177308-0d34-0410-b5e6-96231b3b80d8