This is useful for limited whitebox testing that inspects the assembly
output. Ask me about details (s-s).
The code, which adds a thread-local, is not completely pretty, but this
seemed like a better fix than adding the plumbing for a more elaborate
callback mechanism (essentially, we would need to pass a context argument
down through the disassembly pipeline and adapt all the disassemblers).
I'm open to discussion on this point.
Differential Revision: https://phabricator.services.mozilla.com/D80819
When meta-viewport support is enabled, the call to UpdateResolutionForFirstPaint
from RefreshViewportSize is followed by a call to ShrinkToDisplaySizeIfNeeded,
which calls UpdateResolutionForContentSizeChange and uses the scrollable rect
size for the intrinsic scale computation. So the intrinsic scale computation
in UpdateResolutionForFirstPaint causes a transient state where the resolution
and visual viewport size are wrong. They are corrected immediately after, but
changing the visual viewport size like that ends up marking frames dirty for
reflow. Avoiding the transient state avoids those reflows, which is a nice
optimization.
Differential Revision: https://phabricator.services.mozilla.com/D80996
Talos tests suffer the most from intermittent failures that seem to be due to the Base Profiler.
So until symptoms are reduced (bug 1648324) or the root cause is fixed (bug 1648325), Talos tests will run without the Base Profiler.
Differential Revision: https://phabricator.services.mozilla.com/D81019
The Base Profiler is still recent and barely used, so it may contain some bugs.
With bug 1586939, the Base Profiler is now used more often because it is controlled the same way as the Gecko Profiler.
This has surfaced some intermittent issues, which pollute existing tests.
Until the root cause is found (see bug 1648325), setting `MOZ_PROFILER_STARTUP_NO_BASE=1` prevents the Base Profiler from running. This may be used where problems are visible, to diagnose them and/or reduce them where needed.
Differential Revision: https://phabricator.services.mozilla.com/D81018
Table::create() returns false when allocation fails, but does
not report the OOM. This commit adds appropriate calls to
ReportOutOfMemory around calls to Table::create().
Differential Revision: https://phabricator.services.mozilla.com/D80754
The spec no longer has an initial memory size limit, so we should allow up to
the limit imposed upon us by ArrayBuffer.
Differential Revision: https://phabricator.services.mozilla.com/D80140
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an initial limit. The code is
updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80139
This commit separates the validation limit defined by the core spec and the
implementation runtime limit defined by the JS-API spec. The exception
generated for the runtime check is a WebAssembly.RuntimeError as decided upon
upstream.
Additionally, there is no longer the concept of an 'initial' limit, so code
is updated to reflect this.
Differential Revision: https://phabricator.services.mozilla.com/D80138
Data and element segment decoding used the initial memory/table limit values,
which will be removed in a later commit. This commit gives data/elem segments
their own implementation-defined limit to prevent interference.
Differential Revision: https://phabricator.services.mozilla.com/D80137
There are two kinds of limits on table and memory types: a limit applied during
validation, defined by the core spec, and a limit applied during instantiation,
defined by the JS-API embedding spec. Our current implementation combines both
and applies them at validation time.
The validation limit is 2^16 pages for memories and 2^32-1 for tables.
(actually the limit for tables is 2^32 in the core spec and fixed to 2^32-1 in
the reference-types proposal, which should be merged soon)
Unfortunately, memory types are defined in pages and our implementation
eagerly converts from pages to bytes upon decoding. This allows for an overflow
when the maximum memory size is used (2^16 * PageSize == UINT32_MAX + 1).
Additionally, the memory64 proposal plans to widen the encoding of limit
values to u64, which will put us in the same situation for tables as well.
(the current memory64 validation limit for 64-bit memories is 2^48 which
would overflow even uint64_t when converted to bytes, so we may want to lobby
for a lower limit there)
This commit widens the fields of Limits to uint64_t to make overflow impossible
in these situations and (less importantly) to get ready for memory64.
The wasm array buffer code stores the runtime value of the maximum length and
was widened accordingly. We don't need to change the byte length representation
because our implementation limit will cause memory allocations to fail well
before UINT32_MAX.
The table code used Limits in TableDesc and was rewritten to store just the
initial/maximum values in 32-bit form, so that no table code has to worry
about 64-bit values. This has the nice side effect of dropping the shared
flag, which was never used and won't be in the future.
There should be no functional changes in this commit.
Differential Revision: https://phabricator.services.mozilla.com/D80136
The added test provides extra coverage for loading the legacy version of the cache file. It does not cover the exact errors, as we don't have the test coverage to catch those at the moment.
Differential Revision: https://phabricator.services.mozilla.com/D80455