This change introduces a "github-sync" component in tools, which
supports synchronizing both wgpu and WebRender with GitHub.
~~It also features a "cargo test" job for standalone wgpu (bug 1596127)~~
The code is ported from the "gfx/wr/scripts/wrupdater" folder. The changes are:
1. remove the explicit WR-specific parts and make them configurable via command-line parameters
2. detect "mozilla-xxx" tags and use them in addition to the commits (see the sketch below)
As a follow-up, wrupdater will be removed in favor of github-sync.
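For item 2, a minimal sketch of how "mozilla-xxx" tags could be listed, assuming a plain git checkout (this is not the actual github-sync code):

```python
# Hypothetical helper: list tags matching "mozilla-*" in a local checkout.
import subprocess

def mozilla_tags(repo_path):
    out = subprocess.check_output(
        ["git", "tag", "--list", "mozilla-*"], cwd=repo_path, text=True
    )
    return [tag for tag in out.splitlines() if tag.strip()]
```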
Status:
- [x] get the CI test job working
- [x] get @kats to fork the "wgpu" GitHub repo for the "moz-gfx" bot
- [x] remove the wgpu testing CI job (into separate PR)
- [x] create new secret and reference it
Differential Revision: https://phabricator.services.mozilla.com/D57057
--HG--
extra : moz-landing-system : lando
We've long handled chunks by defining the total number of chunks in our CI
configuration, and then passing that value down into the test harnesses at task
runtime (via the '--this-chunk' and '--total-chunks' parameters). The test
harness then runs an algorithm to determine which tests should be run in "this"
chunk.
There are several problems with this approach, but by far the biggest is that
we can't use test information in our scheduling algorithms. The information
simply isn't available yet. This patch switches things around such that we
determine which tests go in which tasks during the taskgraph generation. This
means we have perfect information around which tasks are running which tests,
and if, e.g., a ccov or machine-learning algorithm deems a particular test
important, we can make sure to *only* schedule the tasks that contain that
test.
I'm planning to enable this a couple of suites at a time so we don't
accidentally stop running tests. Specifically, this patch only enables the
new mode for 'mochitest-media', 'mochitest-browser-chrome' and
'mochitest-devtools-chrome'.
I chose these suites because they are the ones that are already using the
'chunk_by_runtime' algorithm.
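For illustration, a greedy runtime-balancing partition in the spirit of 'chunk_by_runtime' could look like the following; this is a hypothetical standalone sketch, not the taskgraph implementation:

```python
# Assign each test (longest first) to the chunk with the smallest
# accumulated runtime, using a heap keyed on total runtime.
import heapq

def chunk_by_runtime(tests, total_chunks):
    """tests: iterable of (name, runtime_seconds); returns a list of chunks."""
    heap = [(0.0, i, []) for i in range(total_chunks)]
    heapq.heapify(heap)
    for name, runtime in sorted(tests, key=lambda t: t[1], reverse=True):
        total, i, names = heapq.heappop(heap)
        names.append(name)
        heapq.heappush(heap, (total + runtime, i, names))
    return [names for _, _, names in sorted(heap, key=lambda c: c[1])]
```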
Differential Revision: https://phabricator.services.mozilla.com/D52729
--HG--
extra : moz-landing-system : lando
Right now toolchain tasks that aren't dependencies of requested tasks
will run in response to source code changes only if the files are in the
sparse profile. That is, taskgraph calculates the digest based on just
the files in those directories that are in the sparse profile, and will
rebuild it when those files change. This change ensures that more
geckodriver files are used to calculate the digest, so that changes to
them will cause geckodriver to be rebuilt.
As more things come to depend on the geckodriver toolchain tasks
directly, this becomes less valuable, and the files can be removed from
the sparse profile.
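Conceptually, the digest is just a hash over the task's input files, so adding files widens the set of changes that trigger a rebuild. A rough sketch of the idea (hypothetical, not the actual taskgraph hashing code):

```python
# Hash both the path and the contents of every input file; any change to
# any of them produces a different digest.
import hashlib
from pathlib import Path

def files_digest(paths):
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.encode("utf-8"))
        h.update(Path(p).read_bytes())
    return h.hexdigest()
```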
Differential Revision: https://phabricator.services.mozilla.com/D47459
--HG--
extra : moz-landing-system : lando
In browsertime.zip we should have:
browsertime/
    package.json
    package-lock.json
    node_modules/
        .bin/
            browsertime -> ../browsertime/bin/browsertime.js
        browsertime/
            ...
The idea is that we'll fetch browsertime.zip in a generic-worker
environment and be able to run Node.js from within the top-level
browsertime/ directory.
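A sketch of what a consumer might do, assuming Node.js is on PATH (the file and flag names here are illustrative):

```python
# Extract browsertime.zip, then run the bundled binary from the top-level
# browsertime/ directory via the node_modules/.bin symlink.
import subprocess
import zipfile

zipfile.ZipFile("browsertime.zip").extractall(".")
subprocess.check_call(
    ["./node_modules/.bin/browsertime", "--version"], cwd="browsertime"
)
```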
Differential Revision: https://phabricator.services.mozilla.com/D38773
--HG--
extra : moz-landing-system : lando
This was originally from bug 1528374 for Mac PGO; that isn't able to land
yet, but this should help Windows PGO runs in the meantime.
Differential Revision: https://phabricator.services.mozilla.com/D37807
--HG--
extra : moz-landing-system : lando
In fact, "simply" use whatever python configure does to find a shell to
execute config.guess and config.sub, and get both the mozconfig content
and the real, canonicalized target alias. This has the side effect of
making builds with --target=$cpu use a complete obj-$cpu-$os default
objdir instead of obj-$cpu. This will also allow changing the
host-guessing logic without having to duplicate code.
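The canonicalization itself is what autoconf's config.sub does; a minimal sketch of the idea, assuming the script lives at build/autoconf/config.sub (the shell discovery is simplified here):

```python
# Canonicalize a --target value into a full cpu-vendor-os triple by
# running config.sub through a shell.
import subprocess

def canonicalize(target):
    return subprocess.check_output(
        ["sh", "build/autoconf/config.sub", target], text=True
    ).strip()

# e.g. canonicalize("aarch64-linux") -> "aarch64-unknown-linux-gnu"
```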
Differential Revision: https://phabricator.services.mozilla.com/D17618
--HG--
extra : moz-landing-system : lando
Although this task technically doesn't build a toolchain, the set of
steps it needs to do is very similar to what a toolchain build does, so
we're shoehorning this task into the toolchain kind. The task basically
runs `cargo vendor` on the gfx/wr/Cargo.lock file (if/when it changes)
and exports a tarball of the resulting vendored crates. This allows
downstream tasks that build things in gfx/wr to avoid re-fetching these
crates from crates.io on every test run.
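A rough sketch of the vendoring step (the output directory and archive names are illustrative, not the real task's):

```python
# Run `cargo vendor` in gfx/wr, then tar up the vendored crates so
# downstream tasks can skip fetching from crates.io.
import subprocess
import tarfile

subprocess.check_call(["cargo", "vendor", "wr-vendored"], cwd="gfx/wr")
with tarfile.open("wr-vendored.tar.bz2", "w:bz2") as tar:
    tar.add("gfx/wr/wr-vendored", arcname="wr-vendored")
```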
Differential Revision: https://phabricator.services.mozilla.com/D14406
--HG--
extra : moz-landing-system : lando
L10n jobs should run per-push, based on the corresponding builds
Differential Revision: https://phabricator.services.mozilla.com/D1403
--HG--
extra : rebase_source : b7b7f34204bd06ba2baf92ae6d103f8c207214fb
L10n jobs should run per-push, based on the corresponding builds
Differential Revision: https://phabricator.services.mozilla.com/D1403
--HG--
extra : rebase_source : fc669df941b70b720a6eb4118ad36d6679a28d48
Since it is run with `mach python`, it uses the environment defined by
`build/virtualenv_packages.txt`. Thus, we don't need to create a separate
virtualenv.
Differential Revision: https://phabricator.services.mozilla.com/D1015
--HG--
extra : rebase_source : 023095f82d7219a10dffb3579276c5285db37cfc
extra : source : a2999a5b9b7aa08a7c5c2ba6384724853d14b9c5