Now we have the following 5 public methods; the consumer should call one of them, depending on the compilation target:
* initForGlobal
* initForStandaloneFunction
* initForEval
* initForModule
* initFromLazy
Depends on D88216
Differential Revision: https://phabricator.services.mozilla.com/D88217
CompilationState is used from both the Parser and the BytecodeEmitter,
so SourceAwareCompiler should own it and hide CompilationState from API
consumers (except the place that directly calls the Parser).
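A minimal sketch of the intended ownership, assuming illustrative method names (the real classes are C++ in the SpiderMonkey frontend):

```python
class CompilationState:
    """Internal data shared by Parser and BytecodeEmitter."""

class SourceAwareCompiler:
    def __init__(self):
        self._state = CompilationState()  # owned here, hidden from consumers

    def parse(self, parser):
        # The one place that hands the internal state to the Parser directly.
        parser.run(self._state)
```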
Depends on D88215
Differential Revision: https://phabricator.services.mozilla.com/D88216
The output of the compilation is input + stencil; CompilationState is
purely internal data.
Depends on D88213
Differential Revision: https://phabricator.services.mozilla.com/D88214
CompilationGCOutput should have a different lifetime than input + stencil.
CompilationGCOutput can even be allocated on a different thread than the
compilation (off-thread compilation will instantiate the stencil on the main
thread).
Depends on D88208
Differential Revision: https://phabricator.services.mozilla.com/D88209
This patch just adds 4 structs to categorize CompilationInfo fields.
Later patches will simplify methods and consumers.
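A rough sketch of the categorization (the real structs are C++; the four groups below are inferred from the neighboring commit messages: input, stencil, GC output, and the internal state):

```python
from dataclasses import dataclass, field

@dataclass
class CompilationInput:      # what the consumer provides (source, options, ...)
    pass

@dataclass
class CompilationStencil:    # the GC-free output of compilation
    pass

@dataclass
class CompilationGCOutput:   # GC objects instantiated from the stencil
    pass

@dataclass
class CompilationState:      # internal data for Parser/BytecodeEmitter
    pass

@dataclass
class CompilationInfo:       # existing fields, grouped into four categories
    input: CompilationInput = field(default_factory=CompilationInput)
    stencil: CompilationStencil = field(default_factory=CompilationStencil)
    gc_output: CompilationGCOutput = field(default_factory=CompilationGCOutput)
    state: CompilationState = field(default_factory=CompilationState)
```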
Depends on D88204
Differential Revision: https://phabricator.services.mozilla.com/D88205
ParserAtom now holds either an AtomIndex, which is an index into
CompilationInfo.atoms, or a WellKnownAtomId, which maps to a cx->names() field.
ParserAtoms in WellKnownParserAtoms hold a WellKnownAtomId,
and ParserAtoms in CompilationInfo.parserAtoms hold an AtomIndex.
GetWellKnownAtom relies on the struct layout of JSAtomState to quickly map
a WellKnownAtomId to the corresponding field.
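A schematic sketch of the two representations and the lookup (the real code is C++; the names and the set of well-known atoms shown are illustrative):

```python
from enum import IntEnum

class WellKnownAtomId(IntEnum):
    # The order mirrors the fixed field layout of JSAtomState, which is what
    # lets GetWellKnownAtom turn an id into the right field with arithmetic.
    empty = 0
    proto = 1
    length = 2

WELL_KNOWN_NAMES = ["", "__proto__", "length"]  # indexed by WellKnownAtomId

class ParserAtom:
    def __init__(self, atom_index=None, well_known_id=None):
        # Exactly one of the two is set.
        assert (atom_index is None) != (well_known_id is None)
        self.atom_index = atom_index        # index into CompilationInfo.atoms
        self.well_known_id = well_known_id  # maps to a cx->names() field

def get_well_known_atom(atom_id):
    # The C++ version computes the JSAtomState field address directly from
    # the id; a table lookup models the same constant-time mapping.
    return WELL_KNOWN_NAMES[atom_id]
```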
Differential Revision: https://phabricator.services.mozilla.com/D88203
Instead, rename funcData to scriptData and reserve index 0 for top-level. In
non-function script cases, we have to explicitly reserve the ScriptStencil
because this was previously only done by the FunctionBox code.
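A small sketch of the resulting layout, assuming a placeholder ScriptStencil:

```python
class ScriptStencil:  # placeholder for the real C++ struct
    pass

TOP_LEVEL_INDEX = 0
script_data = [ScriptStencil()]  # index 0 is explicitly reserved for top-level

def allocate_function_stencil():
    # Function stencils append after the reserved slot, so their indices
    # start at 1; previously only the FunctionBox code did this allocation.
    script_data.append(ScriptStencil())
    return len(script_data) - 1
```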
Differential Revision: https://phabricator.services.mozilla.com/D87068
The stencil data structures no longer interact directly with the GC and
remaining tracing code can be removed. The relevant vectors in the
CompilationInfo can also be simplified as a result.
Differential Revision: https://phabricator.services.mozilla.com/D88079
When zooming, webrender overrides the raster space used to render text, so that
we do not expensively rerasterize the glyphs for every fractional change in zoom
level. Previously we chose to do this when any ancestor of the picture's spatial
node was being zoomed. This worked on most pages, because the scroll frame which
is used as the main picture caching slice is a descendant of the zooming
reference frame.
However, on pages without a scrollable frame, or for fixed-position content, the
picture's spatial node will not be a descendant of the zooming reference
frame. This meant that we did not detect that we were zooming and rendered the
text in screen raster space rather than the overridden local space, leading to
poor zooming performance.
To fix this, check whether the primitive's spatial node (rather than the
picture's) is a descendant of the zooming reference frame.
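A schematic sketch of the check (the actual code is Rust inside WebRender; the tree-walk helper and names here are illustrative):

```python
def is_descendant_of(spatial_tree, node, ancestor):
    # Walk up the spatial tree from `node` looking for `ancestor`.
    while node is not None:
        if node == ancestor:
            return True
        node = spatial_tree.parent(node)
    return False

def raster_space_for_text(spatial_tree, prim_spatial_node, zoom_reference_frame):
    # The fix: start from the primitive's spatial node, not the picture's,
    # so fixed-position content and unscrollable pages are detected too.
    if is_descendant_of(spatial_tree, prim_spatial_node, zoom_reference_frame):
        return "local"   # keep glyphs in local space while zooming
    return "screen"
```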
Differential Revision: https://phabricator.services.mozilla.com/D88474
This is a fork of the release-partner-repack-beetmover kind and transform. It's modified to cope with having one upstream task with many partner builds, rather than many beetmover tasks, each dealing with a single config-platform-locale combination.
Differential Revision: https://phabricator.services.mozilla.com/D87730
A single task is created to do all partner attributions. The partner_attribution transform processes the configuration into an environment variable for the tools/attribution/attribute.py script to use. This encoding is quite verbose, so a large number of configurations may cause problems.
Applies the same priority modification to attribution tasks as to partner repacks, so that they do not impede the main part of the graph.
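A hedged sketch of the transform's core step (the real transform lives in the taskgraph; the variable name and helper here are illustrative):

```python
import json

def attribution_task_env(partner_configs):
    # All partner attributions run in a single task, so the whole
    # configuration is flattened into one environment variable for
    # tools/attribution/attribute.py to consume.
    return {"ATTRIBUTION_CONFIG": json.dumps(partner_configs)}
```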
Differential Revision: https://phabricator.services.mozilla.com/D87729
The partner attribution config is stored in the same repository as the repo manifest for partner repacks, but all in attribution_config.yml instead of default.xml. This extends the existing support for reading files via the GitHub API to retrieve and process the attribution config.
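A sketch of reading a file through the GitHub contents API (the owner/repo names below are placeholders, and the real code reuses the existing taskgraph helpers rather than raw urllib):

```python
import base64
import json
import urllib.request

import yaml  # PyYAML, assumed available

def github_file(owner, repo, path, ref="master"):
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}?ref={ref}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return base64.b64decode(payload["content"]).decode("utf-8")

attribution_config = yaml.safe_load(
    github_file("example-org", "partner-manifests", "attribution_config.yml"))
```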
Differential Revision: https://phabricator.services.mozilla.com/D87728
Renames the release_enable_partners parameter to release_enable_partner_repack, and adds release_enable_partner_attribution for attribution. This is to provide support for disabling them independently in main releases and in respins.
Adds docs for attribution and updates docs for repacks.
Hardwires values for the enable params for the respin flavors; in other cases they are read from the input (defaulting to on in promotion, off otherwise).
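A sketch of that defaulting logic (the parameter names are from this commit; the flavor names and hardwired values are assumptions):

```python
# Hardwired values per respin flavor; the exact flavors/values are assumptions.
RESPIN_HARDWIRED = {
    "promote_firefox_partner_repack": {
        "release_enable_partner_repack": True,
        "release_enable_partner_attribution": False,
    },
    "promote_firefox_partner_attribution": {
        "release_enable_partner_repack": False,
        "release_enable_partner_attribution": True,
    },
}

def enable_params(flavor, graph_input, is_promotion):
    if flavor in RESPIN_HARDWIRED:
        return dict(RESPIN_HARDWIRED[flavor])
    default = bool(is_promotion)  # on in promotion, off otherwise
    return {
        name: graph_input.get(name, default)
        for name in ("release_enable_partner_repack",
                     "release_enable_partner_attribution")
    }
```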
Fixes up the rebuild-kinds for partner repacks so that they reflect the current set, although the top level may be all that is needed.
Differential Revision: https://phabricator.services.mozilla.com/D87727
This improves the integrity of downloads of upstream artifacts when using fetch-content. If `verify-hash: True` is set on the fetch config, then the chain-of-trust.json of the upstream task is used to retrieve the expected sha256 of the artifact, and this is checked.
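A sketch of the verification step (the chain-of-trust.json layout shown is an assumption, and fetch-content's plumbing for locating the upstream task is omitted):

```python
import hashlib
import json
import urllib.request

def verify_sha256(artifact_path, chain_of_trust_url, artifact_name):
    with urllib.request.urlopen(chain_of_trust_url) as resp:
        cot = json.load(resp)
    expected = cot["artifacts"][artifact_name]["sha256"]  # assumed layout
    digest = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise ValueError(f"sha256 mismatch for {artifact_name}")
```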
Differential Revision: https://phabricator.services.mozilla.com/D87725
Prior to this patch the task graph would always include a release-partner-repack-<platform> task for all 6 platforms, regardless of what was specified in release_partner_config. This was particularly obvious in the off-cycle respin scenario when a single partner is repacked. By moving and reusing get_repack_ids_by_platform() it's easy to skip unneeded platforms.
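An illustrative sketch (the helper name is from this commit; the config shape and platform list are guesses):

```python
ALL_PLATFORMS = ["linux-shippable", "linux64-shippable", "macosx64-shippable",
                 "win32-shippable", "win64-shippable", "win64-aarch64-shippable"]

def get_repack_ids_by_platform(partner_config, platform):
    return [
        f"{partner}/{sub}/{locale}"
        for partner, subs in partner_config.items()
        for sub, cfg in subs.items()
        if platform in cfg.get("platforms", [])
        for locale in cfg.get("locales", [])
    ]

def platforms_to_repack(partner_config):
    # Only create release-partner-repack-<platform> tasks for platforms that
    # some partner actually targets, instead of always creating all six.
    return [p for p in ALL_PLATFORMS
            if get_repack_ids_by_platform(partner_config, p)]
```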
Differential Revision: https://phabricator.services.mozilla.com/D87724
It'll open a window / tab that they can then close. This allows fuzzers
to catch crashes in our printing codepaths. Can't wait for the fun to
start.
Differential Revision: https://phabricator.services.mozilla.com/D88477
There are various problems happening when dealing with the output from
setup.py during virtualenv setup, all of which stem from the process
command output not being a unicode string in Python.
As this code is still used to set up Python 2 virtualenvs, we need to use
the backwards-compatible universal_newlines=True trick.
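A minimal example of the trick: with universal_newlines=True the captured output is a text (unicode) string on both Python 2 and Python 3 (on Python 3.7+ the same option is also spelled text=True):

```python
import subprocess

# Illustrative command; the real code captures setup.py output during
# virtualenv creation.
output = subprocess.check_output(
    ["python", "setup.py", "--version"],
    universal_newlines=True,  # decode to str instead of returning bytes
)
assert isinstance(output, str)
```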
Differential Revision: https://phabricator.services.mozilla.com/D88372
We need a sync IPC call for this because otherwise the number of smaller sync messages we would need to send would be variable.
Differential Revision: https://phabricator.services.mozilla.com/D88076
This, hopefully, begins to address an ongoing global problem where we have few, if any, insights into the performance of individual build tasks (compilations, calls into Python scripts, etc.) At most we have aggregated statistics about how long tiers last, combined with `sccache` aggregates across the entire build (which don't cover non-compilation tasks). This has a few implications:
1. It's impossible to identify bottlenecks, except by going out of your way to notice and reproduce them. e.g. no one, to my knowledge, was aware that `make_dafsa.py` was a bottleneck until someone happened to notice and report it in bug 1629337. We could have systems that automatically detect this sort of thing, or at least that make it easier to do so than by CTRL-C'ing in the middle of the build several times to try to reproduce the problem.
2. It's impossible to detect regressions, unless the regression is so pronounced and severe that it has an immediate impact on the overall build time and triggers build time alerts.
3. It's impossible to identify that you have *fixed* regressions, except by doing ad-hoc timing measurements by building individual `make` targets. This is error-prone and annoying.
Here we propose a low-friction system wherein individual build tasks log their own perf info. For now, that's a write to `stdout` consisting of the string `BUILDTASK ` followed by a simple JSON object with a start time, end time, the `argv` of the task, and an additional `"context"` key (I anticipate this could be used to annotate the task with relevant per-task data for later aggregation, for example: was this an `sccache` cache hit or not? For now, it's empty everywhere). The build controller then collects this data, validates it, and writes out the entire list of build tasks as a JSON file after the build has completed, similarly to what we already do with `build_resources.json`. We already parse some `make` output to do stuff like tracking when we switch tiers, so this isn't a huge architectural shift or anything.
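A hedged sketch of what an instrumented task might emit (field names follow the description above; the exact schema isn't reproduced here):

```python
import json
import sys
import time

def emit_buildtask_record(argv, start, end, context=None):
    record = {
        "start": start,
        "end": end,
        "argv": argv,
        "context": context or {},  # e.g. sccache hit/miss; empty for now
    }
    # The build controller scans stdout for lines with this prefix,
    # validates them, and aggregates them into a JSON file post-build.
    sys.stdout.write("BUILDTASK " + json.dumps(record) + "\n")

start = time.time()
# ... run the actual work of the task here ...
emit_buildtask_record(sys.argv, start, time.time())
```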
In my opinion this "should" happen at the build system, or `make`, level, but `make` doesn't expose anything resembling this information to my knowledge, so this has to be implemented outside of `make`. One could implement something like this at the `sccache` level but that doesn't touch anything but C/C++/Rust compilation tasks; an ideal solution would support other generic build tasks. We could also fork `make` to add this feature ourselves, but for several reasons I don't think that's tractable. :)
Of course, this approach has downsides:
1. We depend on parsing the `stdout` of `make`, and processes can unfortunately sometimes trample on each other, leading to data loss for individual build tasks occasionally. To my knowledge this is a necessary limitation of the model, and I don't know that it can be fixed generally. In my testing, not much data tends to be lost.
2. Dumping arbitrary data to `stdout` isn't always possible or desirable. If you're not careful about it this can also result in noisier-than-necessary tasks, especially when those tasks are not invoked by a parent process that knows how to handle the special `BUILDTASK` lines.
3. This data is raw enough where aggregation is not completely trivial.
4. This functionality has to be added for any new kind of build task whose performance we'd like to track; it doesn't come "for free" due to not being able to be implemented at the build system level.
5. The data isn't awfully small due to the `argv`'s (at this point, not nearly big enough that we need to be concerned about it IMO, but maybe that will change in the future?)
One can imagine a couple of other architectures that could avoid the first two problems, namely: 1) we could use a "real" database that would not dump info to `stdout` and wouldn't lose data, like `sqlite3`; or, 2) we could set up another server, similar to `sccache`, that collects this data from subprocesses and aggregates it, making sure not to lose any along the way. Both of these have enough overhead, in terms of engineering effort or actual impact on latency, that I don't know that they make any sense to even attempt implementing. The remaining downsides continue to be real issues, however.
After this is landed there are a few ways forward. We can start uploading these files as build artifacts in CI to allow us to reason about performance impacts of changes in `central`. We can easily add this functionality to the `sccache` client to start tracking those builds as well. We already have a very simple visualization of build tier timing in `mach resource-usage`; we could join that data against the `BUILDTASK` data to produce a very clear visualization of build bottlenecks, i.e., "why is the `export` tier taking so long", etc.
Differential Revision: https://phabricator.services.mozilla.com/D80284
This changes most of our automation builds to clang 11.0.0 rc2.
Not included:
* code coverage builds, per bug 1660341
* mingw builds, which have traditionally been on their own update cadence, and in this case are blocked anyway by bug 1658632
This will leave some unused clang-9 task definitions. I intend to clean them up, but at a later date. For now I want to focus on making sure this update sticks, since patches like this have a tendency to bounce.
Differential Revision: https://phabricator.services.mozilla.com/D88313
A few test cases fail under clang-11 on Linux debug builds only. As described in the bug, we unfortunately don't have the bandwidth to investigate, so this patch accepts the failures.
Differential Revision: https://phabricator.services.mozilla.com/D88363
For not-well-understood reasons, ld's `--gc-sections` discards a large number of the PGO bookkeeping structures that enable us to keep track of function counters, and the effect gets worse in object files generated by clang-10.
As much as I'd like to understand this better, the investigations take way too much time. As a path of least resistance, we can disable `--gc-sections` for the instrumentation phase of PGO builds. It won't harm anything since users never see those builds, and it will improve the performance of the optimized phase greatly.
Differential Revision: https://phabricator.services.mozilla.com/D78112
Some geckoview tests require gradient usage. Since background
images are async, these tests would wait for a contentful paint
to make sure the images are decoded before running the assertions.
This causes an issue because gradient-only backgrounds aren't
contentful anymore according to the latest spec.
We fix the tests by adding a transparent gif to the background
image list to trick the contentful detection.
Differential Revision: https://phabricator.services.mozilla.com/D88230