This change includes a few fixes to other issues in mach file-info, such
as broken error handling for invalid paths.
Now that we have a mechanism for defining file-based metadata, let's add
a mach command to interface with it.
Currently, we limit ourselves to simple Bugzilla data dumping. Features
will be added over time.
The Files sub-context allows us to attach metadata to files based on
pattern matching rules.
Patterns are matched against files in a last-write-wins fashion.
The sub-context defines the BUG_COMPONENT variable, which is a 2-tuple
(actually a named tuple) defining the Bugzilla product and component for
files. There are no consumers yet. But an eventual use case will be to
suggest a bug component for a patch/commit. Another will be to
automatically suggest a bug component for a failing test.
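For example, a moz.build file can now contain something like this (the
product and component names below are just placeholders):
    with Files('**/*.js'):
        BUG_COMPONENT = ('Core', 'General')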
We want the ability to read data from any moz.build file without needing
a full build configuration (running configure). This will enable tools
to consume metadata by merely having a copy of the source code and
nothing more.
This commit creates the EmptyConfig object. It is a config object that -
as its name implies - is empty. It will be used for reading moz.build
files in "no config" mode.
Many moz.build files make assumptions that variables in CONFIG are
defined and that they are strings. We create the EmptyValue type that
behaves like an empty unicode string. Since moz.build files also do some
type checking, we carve an exemption for EmptyValue, just like we do for
None.
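The idea, as a minimal sketch (Python 2 era; the in-tree class may
differ in detail):
    class EmptyValue(unicode):
        # Looks like an empty unicode string, so moz.build code that does
        # string operations on CONFIG values keeps working.
        def __new__(cls):
            return super(EmptyValue, cls).__new__(cls, '')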
We add a test to verify that reading moz.build files in "no config" mode
works. This required some minor changes to existing moz.build files to
make them work in the new execution mode.
Building on top of the API to retrieve relevant moz.build files for a
given path, we introduce a moz.build reading API that reads all
moz.build files relevant to a given set of paths. We plan to use this
new API to read metadata from moz.build files relevant to a set of
files.
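A sketch of the intended usage (the method name and return shape here
are assumptions):
    from mozbuild.frontend.reader import BuildReader

    reader = BuildReader(config)  # e.g. an EmptyConfig in "no config" mode
    # Read every moz.build relevant to the given paths, root first.
    contexts = reader.read_relevant_mozbuilds(['dom/foo.cpp', 'layout/bar.h'])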
This patch changes the generator behavior of read_mozbuild to emit the
main context before any processing occurs. This allows downstream
consumers to manipulate state of the context before things like
directory processing occurs. We utilize this capability in the new
reading API to forcefully declare the directory traversal order for
processed moz.build files, overriding DIRS and similar variables.
Since variable exporting doesn't work reliably in this new traversal
mode, variable exporting no-ops when this mode is activated.
Currently, MozSandbox assumes that the FUNCTIONS, SPECIAL_VARIABLES, and
SUBCONTEXTS data structures are the instances that should be associated
with the sandbox. As we introduce new moz.build processing modes that
wish to change processing behavior, it is necessary for them to have
control over these special symbols.
This patch moves the declaration of these types to the special metadata
dictionary which is inherited during recursion. The "read_topsrcdir" API
now explicitly passes the initial metadata into "read_mozbuild".
An upcoming patch introduces a use case for a strongly typed named
tuple. So, we introduce a generic factory function that can produce these
types.
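A minimal sketch of such a factory (the name and signature are
illustrative, not necessarily the in-tree ones):
    import collections

    def TypedNamedTuple(name, fields):
        # ``fields`` is a list of (field_name, type) pairs; constructing an
        # instance with a wrongly typed field value raises TypeError.
        fieldnames = [fname for fname, _ in fields]

        class TypedTuple(collections.namedtuple(name, fieldnames)):
            def __new__(cls, *args, **kwargs):
                t = super(TypedTuple, cls).__new__(cls, *args, **kwargs)
                for (fname, ftype), value in zip(fields, t):
                    if not isinstance(value, ftype):
                        raise TypeError('%s must be a %s' % (fname, ftype))
                return t

        TypedTuple.__name__ = name
        return TypedTuple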
We have an eventual goal to store file-level metadata in moz.build files
and to have this metadata "cascade" down directory hierarchies. e.g.
metadata in the root directory will apply to all children directories.
A prerequisite for this feature is a way to query which moz.build files
are relevant to a given file. In this patch, we implement an API that
returns this information.
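The computation reduces to walking ancestor directories; a hedged sketch
(the helper name is hypothetical):
    import os

    def relevant_mozbuild_paths(path):
        # Metadata cascades down the tree, so every moz.build from the
        # top source directory down to the file's directory is relevant.
        paths = ['moz.build']
        prefix = ''
        for segment in os.path.dirname(path).split('/'):
            if not segment:
                continue
            prefix = os.path.join(prefix, segment)
            paths.append(os.path.join(prefix, 'moz.build'))
        return paths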
The inputs to scripts for GENERATED_FILES are restricted to filenames
only. We have several examples in the tree, however, where a script
takes non-filename arguments. For converting those cases to use
GENERATED_FILES, we first need to provide some way of "injecting"
non-filename arguments into the script.
This commit adds a method for doing that, by extending the .script flag
on GENERATED_FILES to include an optional method name:
    f = GENERATED_FILES['foo']
    f.script = 'script.py:make_foo'
will invoke the make_foo function found in script.py instead of the
function named main.
As content in moz.build files has grown, it has become clear that
storing everything in one global namespace (the "context") per moz.build
file will not scale. This approach (which is carried over from
Makefile.in patterns) limits our ability to do things like declare
multiple instances of things (like libraries) per file.
A few months ago, templates were introduced to moz.build files. These
started the process of introducing separate contexts / containers in
each moz.build file. But it stopped short of actually emitting multiple
contexts per container. Instead, results were merged with the main
context.
This patch takes sub-contexts to the next level.
Introduced is the "SubContext" class. It is a Context derived from
another context. SubContexts are special in that they are context
managers. When the context manager is entered, the SubContext becomes
the active context for the executing sandbox, temporarily masking the
main context. This means that UPPERCASE
variable accesses and writes will be handled by the active SubContext.
This allows SubContext instances to define different sets of variables.
When a SubContext is spawned, it is attached to the sandbox executing
it. The moz.build reader will now emit not only the main context, but
also every SubContext that was derived from it.
To aid with the creation and declaration of sub-contexts, we introduce
the SUBCONTEXTS variable. This variable holds a list of classes that
define sub-contexts.
Sub-contexts behave a lot like templates: each class name becomes a
symbol name in the sandbox.
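Declaration might look roughly like this (the VARIABLES schema shown is
illustrative, not the exact in-tree layout):
    class Files(SubContext):
        # Variables this sub-context accepts from moz.build files.
        VARIABLES = {
            'BUG_COMPONENT': (tuple, tuple,
                              'The Bugzilla product and component.'),
        }

    SUBCONTEXTS = [Files]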
The regular expression cache for mozpack.path.match was keyed off the
original pattern. However, that variable was mutated as part of the
function and the mutated result was subsequently stored as the cache
key. This effectively resulted in a 0% cache hit rate.
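The essence of the bug, as a simplified sketch (not the exact mozpack
code):
    import re

    _re_cache = {}

    def match(path, pattern):
        if pattern not in _re_cache:
            # Bug: ``pattern`` is rebound to its translated form before
            # being used as the cache key, so later calls with the original
            # pattern never hit. The fix keys the cache off the original.
            pattern = pattern.replace('*', '[^/]*')
            _re_cache[pattern] = re.compile(pattern)
        return bool(_re_cache[pattern].match(path))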
On some tests being written for bug 1132111 which involve a full
filesystem traversal for moz.build files and subsequent execution of
those files, the following timings are indicative of the impact of this
patch.
Before:
    real 16.082s
    user 14.760s
    sys   1.318s
After:
    real  6.345s
    user  5.085s
    sys   1.257s
Support for a callback to be executed post sandbox evaluation was added
in 24b43ecb4cad (bug 949906) to unbust Sphinx as a result of some GYP
processing changes. e93c40d4344f and bug 1071012 subsequently changed
how Sphinx variables are extracted from moz.build, removing the only
consumer of this feature.
Since there are no consumers of this feature left, remove it and make
the code simpler.
The number of cache files or the cache size might decrease under these
circumstances:
* The user manually changes the max cache size to a value smaller than
the current cache size.
* The cache size is approaching the max cache size.
We should not assume both values will have increased by the time the
build is finished.
The Android ARchive contains the compiled Gecko libraries that Firefox
for Android interfaces to. It does not contain the Gecko resources
(the omnijar, omni.ja) nor the compiled Java code (classes.dex).
This also uploads metadata and sha1 hashes for future consumption by
Maven and/or Ivy dependency managers. In some brave future world,
we'll work out exactly what that looks like; for now, this solves a
storage problem (each .aar file is ~20MB) and it's possible to point
Gradle directly at the uploaded Ivy metadata and artifacts.
Now that we have proper moz.build objects for GENERATED_FILES, we can
add 'script' flags and 'args' flags in moz.build for select
GENERATED_FILES. We restrict 'args' to being filenames for ease of
implementing checks for file existence, and many (all?) of the examples
of file generation throughout the tree don't need arbitrary strings or
Python data.
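Using the flags as this commit describes them, a moz.build entry might
look like this (file and script names are illustrative):
    GENERATED_FILES += ['gen.h']
    gen = GENERATED_FILES['gen.h']
    gen.script = 'generate.py'
    # 'args' entries are restricted to filenames for existence checking.
    gen.args = ['input.in']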
This patch is mostly useful for being able to see these changes
independently of the major changes to GENERATED_FILES. We are going to
need proper moz.build objects for GENERATED_FILES when we add the
ability to define scripts and arguments for them, so we might as well do
that first.
With the previous changes, we can now tell Visual Studio about the
actual unified files, rather than the files that are #include'd in them.
I believe this is much closer to what Visual Studio wants to see, and
enables things like Intellisense to work properly.
CommonBackend is where the writing of unified files belongs, since
that's an operation common to all backends. A special hook into
subclasses is used to enable subclass-specific processing of
UnifiedSources.
UnifiedSources will be processed outside of consume_object, so we need
some way of accessing the backend file for an object outside of
consume_object as well.
UnifiedSources, not the recursivemake backend, should be responsible for
figuring out what unified files to generate, and what those unified
files should contain.
Generating the list of IDLs needed to build an xpt from its dependency
list makes us pass along all _previous_ dependencies, inherited from the
.deps makefiles. This leads to removed files being listed on the
xpidl-process.py command line, and the command subsequently failing.
Instead, use generated lists of idl dependencies. At the same time, lighten the
generated Makefile further by not emitting xpt dependencies on their containing
directory, and instead generating it from the $xpt_files list.
This brings down the Makefile size from 100k to 38k.
Similar to the changes made for IPDL files, this commit moves all of the
non-makefile related logic for WebIDL files out of the recursive make
backend and into the common build backend. Derivative backends that
would like to do interesting things with WebIDL files now need to
implement _handle_webidl_build, which takes more parameters, but should
ideally require less duplication of logic.
After a bunch of tiny changes, we're finally ready to make real
progress. We can now move the grouping of the generated IPDL C++ files
and the actual writing of the unified files for them into the common
build backend. Derivative backends now only have to concern themselves
with adding the particular logic that compiling those files requires.
We'll need to write out unified files for multiple backends, not just
the recursive make one. Put that logic someplace where all build
backends can access it.
_add_unified_build_rules shouldn't be in the business of determining how
to group files into their unified files. That logic belongs in the
caller of _add_unified_build_rules. Once that's done, the logic for
determining how to group files can migrate out of the recursive make
backend.
Nothing about writing unified files is specific to the recursive make
backend, and if we want to write the unified files for IPDL and WebIDL
files, we'll need this functionality available in the common build
backend.
RecursiveMakeBackend._group_unified_files doesn't contain any
functionality specific to the recursive make backend. We would also
like to move the unification of generated IPDL and WebIDL source files
into the common build backend. The common build backend would be the
logical place for _group_unified_files, but the frontend
should also be able to handle unifying files so that backends don't have
to duplicate logic for UNIFIED_FILES. Therefore, we choose to move it
to mozbuild.util as its final resting place.
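After the move, any backend can consume it along these lines (assuming
the moved helper keeps roughly this signature):
    from mozbuild.util import group_unified_files

    for unified_name, source_files in group_unified_files(
            sources, unified_prefix='Unified_cpp',
            unified_suffix='cpp', files_per_unified_file=16):
        # Each unified file simply #includes its group of sources.
        write_unified_file(unified_name, source_files)  # hypothetical helper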
Pushing on a CLOSED TREE since Android build only.
This handles:
    list.0=A
    list.1=B
    list.sublist.0=C
so that
    list => [A, B]
    list.sublist => [C]
and
    dict=default
    dict.key1=A
    dict.key2=B
so that
    dict => {key1: A, key2: B}
    dict => default
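In Python terms, the expansion behaves like this sketch (illustrative
only; not the actual implementation):
    def expand(props):
        lists, dicts = {}, {}
        for key, value in props.items():
            if '.' not in key:
                # A bare key supplies a default value.
                dicts.setdefault(key, {})[None] = value
                continue
            base, suffix = key.rsplit('.', 1)
            if suffix.isdigit():
                lists.setdefault(base, {})[int(suffix)] = value
            else:
                dicts.setdefault(base, {})[suffix] = value
        # Order list entries by their numeric suffix.
        lists = dict((k, [v[i] for i in sorted(v)]) for k, v in lists.items())
        return lists, dicts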
Writing the unified files is another thing that will have to be moved
out of recursivemake.py eventually. And it doesn't belong inline amidst
makefile rules and variables. Move its logic to a separate function as
well.
_add_unified_build_rules does quite a lot of work besides adding
makefile rules and variables. The divvying up of source files into
unified files is one part of that, so move it out into its own function.
When we eventually move that computation out of recursivemake.py, this
refactoring will make it easier to verify that's what we've done.
Python API documentation requires the ability to import modules. So, we
set up a virtualenv in our Sphinx environment so module loading works.
This solution isn't perfect: a number of modules fail to import when run
under sphinx-build.
Previously, code for staging the Sphinx documentation from moz.build
metadata lived in a mach command and in the moztreedocs module. This
patch moves the invocation to the Sphinx extension.
When the code is part of the Sphinx extension, it will run when executed
with sphinx-build. This is a prerequisite to getting RTD working, since
sphinx-build is the only supported entrypoint for generating
documentation there.
With this patch, we can now invoke sphinx-build to build the
documentation. The `mach build-docs` command is no longer needed.
The recursivemake backend knows how to do several things with the IPDL
sources:
1) Determine the C++ sources that will be generated from given IPDL
sources.
2) Write out all the makefile rules and variables for said sources.
The first part isn't unique to the recursivemake backend; other backends
would eventually like to know what C++ sources come from IPDL source
files for easier cross-referencing purposes, etc. Let's take a first
cut at moving things into CommonBackend. (This may not be the best
interface, since it relies on consume_finished being invoked, and not
all backends call CommonBackend.consume_finished. Still, it's a start.)
Various bits of the test harnesses key off of mozinfo.info.get('asan');
we will need a similar switch for finding out whether this build
supports tsan.
Now that the mozbuild backend knows about FINAL_TARGET, we are able to
install generated xpt files into their final location. This saves us
from copying xpt files into their final location on every build.
Original patch by gps, rebased and comments addressed by Ms2ger
mozpack.BaseFile.copy() performs a generic read/write file copy. Windows
has an explicit CopyFile() call that tests have shown to be
significantly faster. Let's use that instead via the magic of ctypes.
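A sketch of the ctypes call (Windows-only, and simplified relative to
whatever the real patch does):
    import ctypes
    from ctypes import wintypes

    _CopyFileW = ctypes.windll.kernel32.CopyFileW
    _CopyFileW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.BOOL]
    _CopyFileW.restype = wintypes.BOOL

    def copy_file(src, dst):
        # bFailIfExists=False: overwrite an existing destination file.
        if not _CopyFileW(src, dst, False):
            raise ctypes.WinError()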
Having SOURCES and its close relatives go through VariablePassthru
objects clutters the handling of VariablePassthru in build backends and
makes it less obvious how to handle things that actually get compiled.
Therefore, this patch introduces four new moz.build objects
corresponding to the major variants of SOURCES. It looks like a large
patch, but there's an ample amount of new tests included, which accounts
for about half of the changes.
Now that defining $DMD is no longer necessary to run DMD, this patch does the
following.
- Removes all the places where we set DMD=1 (test harnesses, etc.)
- Still handles DMD=1, for backwards compatibility.
- Prints "$DMD is undefined" at DMD start-up if appropriate.
- Writes a |null| value for |dmdEnvVar| in the JSON if $DMD is undefined. Bumps
the DMD output version number accordingly.
- Changes a bunch of the test files accordingly, including changing the mode of
script-ignore-alloc-fns.json in order to test a case where $DMD is undefined.
Various os.path attributes are being used in tight loops. Having local
variables prevents extra dictionary lookups.
This appears to shave 10-20ms off of the tests install manifest
processing time.
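The pattern is the usual CPython micro-optimization (a generic
illustration):
    import os

    def resolve_all(paths, base):
        # Bind the attribute lookups to locals once, outside the hot loop.
        join = os.path.join
        dirname = os.path.dirname
        return [join(base, dirname(p)) for p in paths]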
FileCopier.copy() was performing a lot of os.path.normpath() operations.
Profiling revealed that os.path.normpath() was the function with the
most wall time CPU usage when processing the tests manifests. Upon
subsequent examination of the code in question, all the paths being used
were already normalized. So, os.path.normpath() wasn't accomplishing
anything.
This patch results in ~300ms reduction in wall time to process the tests
install manifest on a fully populated page cache. Execution time drops
from ~2.8s to ~2.5s.
Profiling reveals that after this patch os.stat() is the #1 wall time
consumer. However, os.path.{join,dirname,normpath} still account for
~1.5x the wall time of os.stat(). There is still room to optimize
this function.
This patch moves profiling mode selection from post-processing (in dmd.py) to
DMD start-up. This will make it easier to add new kinds of profiling, such as
cumulative heap profiling.
Specifically:
- There's a new --mode option. |LiveWithReports| is the default, as it is
currently.
- dmd.py's --ignore-reports option is gone.
- There's a new |mode| field in the JSON output.
- Reports-related operations are now no-ops if DMD isn't in LiveWithReports
mode.
- Diffs are only allowed for output files that have the same mode.
- A new function ResetEverything() replaces the SetSampleBelowSize() and
ClearBlocks(), which were used by the test to change DMD options.
- The tests in SmokeDMD.cpp are split up so they can be run multiple times, in
different modes. The exact combinations of tests and modes have been
changed a bit.
This is a straight copy from
a878bf0ba0
paired with a tiny change to use the new quote_chars option.
buildlist invocations are slow and can occur in parallel since the
underlying program obtains a lock on the modified file.
Moving the XPT-related buildlist invocation from the serial libs tier to
the parallel misc tier decreased my no-op build time on OS X from 43.5s
to 37.0s.
When the misc tier was added, only directories with misc-associated
variables from moz.build were traversed. This patch adds a dummy
variable to moz.build whose presence will add the directory to the misc
tier.
This will enable us to aggressively convert existing libs:: rules
to the misc tier.
JS module installation performs simple file copying or preprocessing.
There is no reason it can't occur in parallel. Move it to the misc tier.
As part of this, I recognized that TESTING_JS_MODULES was assigned to a
tier. Since these files are managed by an install manifest, they don't
belong to any tier. So the tier is now listed as None.
The build system being what it currently is, there are various cases where one
wants something explicit, rather than the current autodetection.
For instance, one may want to run
    make -C $objdir chrome
instead of the
    make -C $objdir/chrome
that mach build chrome currently invokes.
There are several such use cases that mach's autodetection makes harder.
It's also sometimes awkward, when telling people to run
    make -C objdir something
to debug their issues, to hear back that objdir doesn't exist or
something along those lines, because they took "objdir" literally.
There are, sadly, many combinations of linkage in use throughout the tree.
The main differentiator, though, is between program/libraries related to
Gecko or not. Kind of. Some need mozglue, some don't. Some need dependent
linkage, some standalone.
Anyways, these new templates remove the need to manually define the
right dependencies against xpcomglue, nspr, mozalloc and mozglue
in most cases.
Places that build programs and were resetting MOZ_GLUE_PROGRAM_LDFLAGS
or that build libraries and were resetting MOZ_GLUE_LDFLAGS can now
just not use those Gecko-specific templates.
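With the templates in place, a moz.build file reduces to something like
this (assuming the Gecko-prefixed template names; the commit doesn't
spell them out):
    # The template adds the right xpcomglue/nspr/mozalloc/mozglue deps.
    GeckoProgram('foo')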
It's not entirely clear passing down all the metadata makes sense. On the
other hand, when creating the template execution sandbox, passing down
exports does assign the value for the exported variable in that execution
context. When that context is merged with the caller sandbox context, the
exported variable is reassigned, even if the value is not modified. Then,
if the caller sandbox itself reassigns the exported variable, it fails
because calling a template already did it once, unexpectedly.
Not passing down exported variables makes the template execution sandbox
never set those exported variables, so that they are not merged back. The
caller sandbox can then properly reassign the exported variable.
The in-tree Sphinx docs have been broken since bug 1041941 because
processing moz.build files outside their context doesn't work.
Specifically, templates aren't loaded (because this information usually
comes from a parent moz.build file). A new execution mode is needed.
I tried to implement a proper execution mode. However, I kept running
into walls. While we should strive for a proper execution mode, this can
be a follow-up, tracked in bug 1058359.
This patch implements extraction of Sphinx variables from ast walking.
It is extremely low-level and definitely a one-off. But it solves the
problem at hand: |mach build-docs| will work after this patch is
applied.
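The extraction amounts to something like this sketch (the function and
variable names are illustrative):
    import ast

    def extract_assignment(path, name):
        # Find a top-level ``name = <literal>`` assignment in a moz.build
        # file without executing it.
        with open(path) as fh:
            tree = ast.parse(fh.read(), path)
        for node in ast.walk(tree):
            if isinstance(node, ast.Assign):
                for target in node.targets:
                    if isinstance(target, ast.Name) and target.id == name:
                        return ast.literal_eval(node.value)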
The reason to use '+' prefixing was to distinguish between options to the
mach command itself, and options that are passed down to whatever the
command does (like mach run passing down args to the built application).
That makes things unnecessarily awkward, and quite non-standard.
Instead, use standard '-' prefixing, and pass all the unknown arguments
down. If there is overlap between the known arguments and arguments supported
by the underlying tool (like -remote when using mach run), it is possible to
use '--' to mark all following arguments as being targeted at the underlying
tool.
For instance:
    mach run -- -remote something
would run
    firefox -remote something
while
    mach run -remote something
would run
    firefox something
As allow_all_arguments is redundant with the presence of an
argparse.REMAINDER CommandArgument, allow_all_arguments is removed. The
only mach command with an argparse.REMAINDER CommandArgument without
allow_all_arguments was "mach dmd", and it did so because it didn't want
to use '+' prefixes.
These manifests are special in that they don't package their test files
into the test package. Each test listed in an instrumentation manifest
serves as an identifier rather than a file.
Up to now, DIRS and TEST_DIRS were dumb values. This change makes them
a list of ContextDerivedValues, and handles the fact that some types of
paths are relative to the current source directory and others to the
topsrcdir.
This also makes us one step closer to fixing bug 991983.
After bug 762358, mk_add_options MOZ_MAKE_FLAGS was simply ignored in
client.mk processing. Meanwhile, the mach environment expected a list of
options while the mozconfig reader returned a single string; straighten
both out at the same time.
Having to walk over elements and strings of HierarchicalStringList with
an external recursive function is un-Pythonic and adds unnecessary
obfuscation to several tasks. Add a walk() function to
HierarchicalStringList, modeled on os.walk(), to handle these cases more
directly.
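Consumers can then iterate directly, os.walk()-style (the exact yielded
shape here is an assumption):
    # ``base`` is the relative sub-path; ``strings`` the values registered
    # at that level of the hierarchy.
    for base, strings in EXPORTS.walk():
        for s in strings:
            install(base, s)  # hypothetical consumer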
The forward slash appears to be the standard path separator in zip/JAR
files. Accept backslashes when adding paths to a JAR.
The install manifest processor starts with an empty InstallManifest and
uses |= to "concatenate" instances. It became pretty obvious when
developing some patches that add more preprocessed files to install
manifests that the source install manifest dependency was getting
lost during the |= operation. This patch fixes it.
The solution is not ideal performance wise. But slightly worse
performance (only after config.status, however) is better than
clobbers.
A test has been added to ensure this doesn't regress.
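The pattern in question (a sketch; InstallManifest lives in
mozpack.manifests):
    from mozpack.manifests import InstallManifest

    combined = InstallManifest()
    for path in manifest_paths:  # paths of partial manifests
        # The merge must also record ``path`` itself as a dependency, or
        # changes to a source manifest won't trigger rebuilds.
        combined |= InstallManifest(path=path)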