This is useful for limited whitebox testing that inspects the assembly
output. Ask me about details (s-s).
The code, adding a thread-local, is not completely pretty. This
seemed like a better fix than adding the plumbing for a more elaborate
callback mechanism (basically we need to pass a context argument down
through the disassembly pipeline and adapt all the disassemblers),
though I'm open to discussion on this point.
Differential Revision: https://phabricator.services.mozilla.com/D80819
Table::create() returns false when allocation fails, but does
not report the OOM. This commit adds appropriate calls to
ReportOutOfMemory around calls to Table::create().
Differential Revision: https://phabricator.services.mozilla.com/D80754
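The pattern here is the usual one for fallible allocation: the creator returns null without reporting, and the caller is responsible for reporting OOM. A minimal sketch with simplified stand-ins for `JSContext`, `Table::create()`, and `ReportOutOfMemory` (the real SpiderMonkey signatures differ):

```cpp
#include <cassert>
#include <memory>

// Simplified stand-ins for the real SpiderMonkey types.
struct JSContext { bool oomReported = false; };
static void ReportOutOfMemory(JSContext* cx) { cx->oomReported = true; }

struct Table {
  // Returns null on allocation failure *without* reporting OOM;
  // callers are responsible for calling ReportOutOfMemory.
  static std::unique_ptr<Table> create(bool simulateOOM) {
    if (simulateOOM) {
      return nullptr;
    }
    return std::make_unique<Table>();
  }
};

// Call-site pattern added by this commit: report the OOM on failure.
std::unique_ptr<Table> createTable(JSContext* cx, bool simulateOOM) {
  auto table = Table::create(simulateOOM);
  if (!table) {
    ReportOutOfMemory(cx);  // previously missing at these call sites
    return nullptr;
  }
  return table;
}
```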
The spec no longer has an initial memory size limit, so we should allow up to
the limit imposed upon us by ArrayBuffer.
Differential Revision: https://phabricator.services.mozilla.com/D80140
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an initial limit. The code is
updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80139
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an 'initial' limit, so code
is updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80138
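The split described in these two commits amounts to two checks at different stages: a core-spec limit at validation time and a (smaller) implementation limit at instantiation time, the latter raising WebAssembly.RuntimeError. A sketch of the shape of the logic; the constants and names below are illustrative, not the engine's actual values:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative constants, not the engine's real values.
constexpr uint64_t ValidationMaxMemoryPages = uint64_t(1) << 16;  // core spec
constexpr uint64_t RuntimeMaxMemoryPages = uint64_t(1) << 14;     // hypothetical impl limit

enum class CheckResult { Ok, ValidationError, RuntimeError };

// Validation-time check: core-spec limit, reported as a compile/validation error.
CheckResult validateMemoryLimit(uint64_t pages) {
  return pages <= ValidationMaxMemoryPages ? CheckResult::Ok
                                           : CheckResult::ValidationError;
}

// Instantiation-time check: implementation limit, reported as a
// WebAssembly.RuntimeError per the upstream decision.
CheckResult instantiateMemory(uint64_t pages) {
  if (validateMemoryLimit(pages) != CheckResult::Ok) {
    return CheckResult::ValidationError;
  }
  return pages <= RuntimeMaxMemoryPages ? CheckResult::Ok
                                        : CheckResult::RuntimeError;
}
```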
Data and element segment decoding used the initial memory/table limit values,
which will be removed in a later commit. This commit gives data/elem segments
their own implementation-defined limit to prevent interference.
Differential Revision: https://phabricator.services.mozilla.com/D80137
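The decode-time check this introduces is a simple bound on segment counts, decoupled from the memory/table limits. A sketch; the limit value below is hypothetical, not the one the engine actually picked:

```cpp
#include <cassert>
#include <cstdint>

// Implementation-defined cap on data/element segment counts, independent
// of the memory/table limits (value here is hypothetical).
constexpr uint32_t MaxSegments = 100000;

// Decode-time validation of a segment count.
bool checkSegmentCount(uint32_t count) {
  return count <= MaxSegments;
}
```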
There are two kinds of limits on table and memory types: a limit applied during
validation, defined by the core spec, and a limit applied during instantiation,
defined by the JS-API embedder spec. Our current implementation combines both
and applies them at validation time.
The validation limit is 2^16 for memory and 2^32-1 for tables.
(actually the limit for tables is 2^32 in the core spec and fixed to 2^32-1 in
the reference-types proposal, which should be merged soon)
Unfortunately, memory types are defined in pages and our implementation
eagerly converts from pages to bytes upon decoding. This allows for an overflow
when the maximum memory size is used (2^16 * PageSize == UINT32_MAX + 1).
Additionally, the memory64 proposal is planning on widening the encoding of
limit values to u64, which will put us in this same situation also for tables.
(the current memory64 validation limit for 64-bit memories is 2^48 which
would overflow even uint64_t when converted to bytes, so we may want to lobby
for a lower limit there)
This commit widens the fields of Limits to uint64_t to make overflow impossible
in these situations and (less importantly) to get ready for memory64.
The wasm array buffer code stores the runtime value of maximum length and was
widened. We don't need to change the byte length representation because our
implementation limit will fail memory allocations well before UINT32_MAX.
The table code used Limits in TableDesc and was rewritten to store just the
initial/maximum values in 32-bit form so that no table code has to worry
about 64-bit values. This has the nice side effect of dropping the shared
flag, which was never used and won't be in the future.
There should be no functional changes in this commit.
Differential Revision: https://phabricator.services.mozilla.com/D80136
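The overflow described above is easy to reproduce: with 32-bit fields, converting the maximum memory size from pages to bytes wraps, since 2^16 pages * 2^16 bytes/page == 2^32 == UINT32_MAX + 1. A sketch showing the wrap and why widening to uint64_t fixes it (simplified; the real code stores these in the Limits struct):

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t PageSize = 65536;           // wasm page size in bytes
constexpr uint32_t MaxMemoryPages = 1u << 16;  // core-spec validation limit

// Before: 32-bit arithmetic wraps at the maximum memory size.
uint32_t byteLength32(uint32_t pages) {
  return pages * PageSize;  // wraps to 0 when pages == 2^16
}

// After: with the fields widened to uint64_t, the product cannot
// overflow for any 32-bit page count.
uint64_t byteLength64(uint64_t pages) {
  return pages * uint64_t(PageSize);
}
```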
This commit enables reference-types by default. The existing config/ifdef'ery is
retained to allow for an easier backout and to support Cranelift until it gains
support for the feature.
Depends on D81012
Differential Revision: https://phabricator.services.mozilla.com/D81013
This commit enables the bulk-memory-operations feature by default. The
config/ifdef'ery is retained to allow for easier backouts if needed.
Differential Revision: https://phabricator.services.mozilla.com/D81012
I still don't know why this is happening, but this should make the problem a release mode assertion failure rather than an illegal memory access.
Differential Revision: https://phabricator.services.mozilla.com/D80866
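Turning the bad access into a controlled crash follows the usual MOZ_ASSERT/MOZ_RELEASE_ASSERT distinction: the former compiles away in release builds, the latter fires in all builds. A simplified sketch of the idea (not the real macro definitions from mfbt/Assertions.h):

```cpp
#include <cassert>
#include <cstdlib>

// Simplified versions of the Mozilla assertion macros: the debug-only
// assert compiles away in release builds; the release assert always fires.
#ifdef DEBUG
#  define MY_ASSERT(cond) do { if (!(cond)) std::abort(); } while (0)
#else
#  define MY_ASSERT(cond) do { } while (0)
#endif
#define MY_RELEASE_ASSERT(cond) do { if (!(cond)) std::abort(); } while (0)

// Guarding the pointer with a release assertion converts a potential wild
// memory access into a deterministic abort with a usable stack trace.
int deref(const int* p) {
  MY_RELEASE_ASSERT(p != nullptr);
  return *p;
}
```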
This is necessary to fix some jit-test timeouts from bailout/recompile loops
with the patch to bail out for cold Baseline ICs. We're also hitting the bounds
check case on Octane-pdfjs already where it's slowing us down.
Hopefully these are the only cases we need to handle. For shape guards and other
guards derived from Baseline ICs I want to make other changes soon.
Eventually the bailout code should be made more obviously correct, but this
should do for now until IonBuilder is gone.
Differential Revision: https://phabricator.services.mozilla.com/D80648
FinalizationRegistrationsObject only holds a vector of weak pointers to FinalizationRecordObjects, but it still needs a trace method so that those weak pointers can get updated by a moving GC.
Differential Revision: https://phabricator.services.mozilla.com/D80630
We have a clang-plugin checker that wants to favor `SprintfLiteral` over `snprintf`, but for some reason it didn't catch this instance prior to clang-11.
Differential Revision: https://phabricator.services.mozilla.com/D80668
We have a clang-plugin checker that wants to favor `SprintfLiteral` over `snprintf`, but for some reason it didn't catch this instance prior to clang-11.
Differential Revision: https://phabricator.services.mozilla.com/D80667
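`SprintfLiteral` is preferred because it deduces the buffer size from the array type, so the size argument can never drift out of sync with the buffer. A simplified reimplementation of the idea (the real helper lives in mozilla/Sprintf.h; this sketch is not its exact definition):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Sketch of the SprintfLiteral idea: the template deduces N from the
// array reference, so callers can't pass a mismatched size to snprintf.
template <std::size_t N, typename... Args>
int MySprintfLiteral(char (&buffer)[N], const char* fmt, Args... args) {
  return snprintf(buffer, N, fmt, args...);
}
```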
For regular calls, the combination of the system ABI and softFP is forbidden
because the code that handles multi-value returns does not contain the
special-case workarounds that are needed to move GPR results into FPRs.
Assert that we're not running into this.
Differential Revision: https://phabricator.services.mozilla.com/D80510
This also means we can simplify ArenaCellIter as it doesn't need to support reset() any more. I had to rename the getCell/get methods returning TenuredCell*/T* to get/as to make this work.
I also changed uses of |ArenaCellIter i| to |ArenaCellIter cell|, like we do for ZonesIter.
Differential Revision: https://phabricator.services.mozilla.com/D80486