Errors during async generator operations can close the generator but leave
entries in the Debugger::generatorFrames map. This trips up asserts in the
single-step code. Since a closed generator will not match the targettedScript,
we simply ignore such entries when checking the assert.
Differential Revision: https://phabricator.services.mozilla.com/D81552
Prior to this patch set, XPConnect always created the first compartment in the
system zone with a content principal. The subsequent patches make that
global's creation lazy, which leads us to create the first compartment in the
system zone with the system principal and the NewCompartmentInSystemZone
specifier. In that case, we call `setIsSystemZone()` when we create the zone,
because the compartment has the system principal, and then call it again when
we try to store it in `rt->gc.systemZone`, which leads to a failed assertion.
This patch fixes that.
Differential Revision: https://phabricator.services.mozilla.com/D79718
This annotation better describes the intent, and clang `-Wvexing-parse`
captures the erroneous use of this class that `MOZ_GUARD_OBJECT*` was
attempting to catch.
Differential Revision: https://phabricator.services.mozilla.com/D81351
I disable this with an error message and a comment, but do not remove it,
because it is currently buggy and unsupported but may well make a comeback
when we have time to clean it up and we see whether wasm SIMD on the web
starts demanding it.
It is of course possible that we will never do this in Ion but will wait
for Cranelift, but we don't have to make that decision today.
Differential Revision: https://phabricator.services.mozilla.com/D81293
This analysis is mostly useful for Warp without TI. We try to infer a known JSClass based
on MIR node types. There is very basic support for phis, enough to handle some
if-else patterns where both sides produce an object of the same class.
To make sure the inferred JSClass is correct, I am also adding an MAssertClass instruction
that asserts the class is correct at runtime.
Differential Revision: https://phabricator.services.mozilla.com/D80507
This is useful for limited whitebox testing that inspects the assembly
output. Ask me about details (s-s).
The code, which adds a thread-local, is not completely pretty. This
seemed like a better fix than adding the plumbing for a more elaborate
callback mechanism (basically, we would need to pass a context argument down
through the disassembly pipeline and adapt all the disassemblers),
though I'm open to discussion on this point.
Differential Revision: https://phabricator.services.mozilla.com/D80819
Table::create() returns false when allocation fails, but does
not report the OOM. This commit adds appropriate calls to
ReportOutOfMemory around calls to Table::create().
Differential Revision: https://phabricator.services.mozilla.com/D80754
The spec no longer has an initial memory size limit, so we should allow up to
the limit imposed upon us by ArrayBuffer.
Differential Revision: https://phabricator.services.mozilla.com/D80140
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an initial limit. The code is
updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80139
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an 'initial' limit, so code
is updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80138
Data and element segment decoding used the initial memory/table limit values,
which will be removed in a later commit. This commit gives data/elem segments
their own implementation-defined limit to prevent interference.
Differential Revision: https://phabricator.services.mozilla.com/D80137
There are two kinds of limits on table and memory types: a limit applied during
validation, defined by the core spec, and a limit applied during instantiation,
defined by the JS-API embedder spec. Our current implementation combines both and
applies them at validation time.
The validation limit is 2^16 for memory and 2^32-1 for tables.
(actually the limit for tables is 2^32 in the core spec and fixed to 2^32-1 in
the reference-types proposal, which should be merged soon)
Unfortunately, memory types are defined in pages and our implementation
eagerly converts from pages to bytes upon decoding. This allows for an overflow
when the maximum memory size is used (2^16 * PageSize == UINT32_MAX + 1).
Additionally, the memory64 proposal is planning on widening the encoding of
limit values to u64, which will put us in this same situation also for tables.
(the current memory64 validation limit for 64-bit memories is 2^48 which
would overflow even uint64_t when converted to bytes, so we may want to lobby
for a lower limit there)
This commit widens the fields of Limits to uint64_t to make overflow impossible
in these situations and (less importantly) to get ready for memory64.
The wasm array buffer code stores the runtime value of the maximum length and
was widened accordingly. We don't need to change the byte-length representation
because our implementation limit will fail memory allocations well before
UINT32_MAX.
The table code used Limits in TableDesc and was rewritten to store just the
initial/maximum values in 32-bit form, so that no table code has to worry
about 64-bit values. This has the small but nice side effect of dropping the
shared flag, which was never used and won't be in the future.
There should be no functional changes in this commit.
Differential Revision: https://phabricator.services.mozilla.com/D80136
This commit enables reference-types by default. The existing config/ifdef'ery is
spared to allow for an easier backout and to support Cranelift until it gains
support for the feature.
Depends on D81012
Differential Revision: https://phabricator.services.mozilla.com/D81013
This commit enables the bulk-memory-operations feature by default. The config/
ifdef'ery is spared to allow for easier backouts if needed.
Differential Revision: https://phabricator.services.mozilla.com/D81012