The current coreaudio-sys in gecko is a custom 0.2.2 version, used to work
around the cross-compiling issue mentioned in bug 1569003. That issue has
been fixed in coreaudio-sys 0.2.3, so we should follow upstream instead of
carrying a custom version. As a result, coreaudio-sys will generate API
bindings based on the macOS SDK defined in the build settings.
Differential Revision: https://phabricator.services.mozilla.com/D50531
--HG--
extra : moz-landing-system : lando
If the run task generates bad profile data, the merge step in the
profile-use task will fail. However, retrying the profile-use task
doesn't fix the problem, and there isn't a straightforward way to retry
the run task in this situation. Instead we can add a clang toolchain to
all the run tasks, and perform the merge there.
This means the output from the run task will always be a successfully
merged file called 'merged.profdata', and we no longer need to perform
the merge as part of the profile-use build as a GENERATED_FILES step.
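A minimal sketch of what that merge step amounts to, assuming the raw
profiles end up as *.profraw files in one directory (the helper name and
layout are illustrative, not the actual task code):

import glob
import subprocess

def merge_profiles(profraw_dir, output="merged.profdata"):
    # Collect the raw profiles produced by the instrumented run...
    raw_files = sorted(glob.glob(profraw_dir + "/*.profraw"))
    # ...and merge them with llvm-profdata, which comes from the clang
    # toolchain now attached to the run task. Bad profile data fails here,
    # in the run task, rather than later in the profile-use build.
    subprocess.check_call(["llvm-profdata", "merge", "-o", output] + raw_files)
    return output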
Depends on D45262
Differential Revision: https://phabricator.services.mozilla.com/D45263
--HG--
extra : moz-landing-system : lando
We don't actually care that much about LTO'ing the rust parts of libxul
for gtests, and not LTO'ing them would save multiple minutes of build time
on automation.
Differential Revision: https://phabricator.services.mozilla.com/D42812
--HG--
extra : moz-landing-system : lando
With the addition of toolkit/library/build because of the rust
shenanigans, bug 1573314 and bug 1572046 don't do anything useful
anymore. We're going to do something better anyway.
Differential Revision: https://phabricator.services.mozilla.com/D42251
--HG--
extra : moz-landing-system : lando
When a directory, like toolkit/library, builds both a static and a
shared library, and another, like toolkit/library/gtest, depends on the
static part, it currently needs to wait for the shared library to be
finished building, preventing both libraries being built in parallel.
By separating shared libraries into a different target, we allow more
parallelism in the build.
Differential Revision: https://phabricator.services.mozilla.com/D41099
--HG--
extra : moz-landing-system : lando
We don't need to run binary checks on the instrumentation builds, only the final optimized build.
Differential Revision: https://phabricator.services.mozilla.com/D38382
--HG--
extra : moz-landing-system : lando
When enabling neon (--with-fpu=neon, or when the C++ compiler defaults
to using neon), we pass +neon as a target feature to the rust compiler.
That enables neon in rust, which is the default with the
thumbv7neon-linux-gnueabihf rust target, but not the default for the
armv7-unknown-linux-gnueabihf rust target.
ARM processors may come with a variety of FPUs, with different numbers of
registers. On ARMv7, there are FPUs with 16 registers and FPUs with 32
registers. NEON requires 32 registers.
Because the common denominator for ARMv7 is 16 registers, the
armv7-unknown-linux-gnueabihf rust target defaults to 16 registers,
although by enabling neon, we're guaranteed the processor will have 32.
But while the rust compiler stays limited to 16 registers, it hits a wall
compiling the hyper crate, where it finds it doesn't have enough registers
(which in itself can be considered a bug).
Since enabling neon means there are 32 registers available, it makes
sense to tell the compiler to lift the restricted use of FPU registers,
and that's what the `-d16` target feature does.
That's already the default for the thumbv7neon-linux-gnueabihf rust target,
so nothing changes there, and it fixes things for the
armv7-unknown-linux-gnueabihf rust target.
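A minimal sketch of the resulting flag (the helper name is illustrative;
the real logic lives in the rust configure code):

def rust_target_feature_flags(neon_enabled):
    # +neon turns NEON on; -d16 lifts the 16-FPU-register limitation, which
    # is safe because enabling NEON guarantees 32 FPU registers are present.
    features = ["+neon", "-d16"] if neon_enabled else []
    return ["-C", "target-feature=" + ",".join(features)] if features else []

# For armv7-unknown-linux-gnueabihf with neon enabled, rustc ends up being
# passed: -C target-feature=+neon,-d16
print(" ".join(rust_target_feature_flags(True)))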
Differential Revision: https://phabricator.services.mozilla.com/D33907
--HG--
extra : moz-landing-system : lando
We weren't honoring the case where the library features differ from the test
features (a situation which my previous patch creates).
We were incorrectly overriding `rust_feature_flags`, which of course ended up
with working rusttests with my patches, but a bunch of negative leaks :)
Name the test features differently so that they don't affect the regular
library features.
Differential Revision: https://phabricator.services.mozilla.com/D32777
--HG--
extra : moz-landing-system : lando
We add to `GARBAGE_DIRS` in the toplevel `Makefile.in` because all of
our Rust libraries share a single `CARGO_TARGET_DIR`, located in
topobjdir.
We add to `GARBAGE_DIRS` for Rust programs because Rust programs
currently do not share compilation artifacts with Rust libraries (as our
libraries are built with `panic=abort` and our programs are not, sharing
compilation artifacts between the two is a non-starter).
Differential Revision: https://phabricator.services.mozilla.com/D26762
--HG--
extra : moz-landing-system : lando
The current setup uses different ways for different platforms, with
different workarounds, even using extra configuration items for Windows.
Now that there can't be a difference between the host per the build
system and the host per rust, we can get rid of those configuration
items, and use a more common infrastructure.
We cannot, however, avoid using wrapper scripts, because per-target rust
link-arg flags don't work that well.
The downside is that this multiplies the number of wrappers: we now need a
different one for host and target, and then .bat files and shell scripts
for Windows hosts and other hosts, respectively.
Depends on D24321
Differential Revision: https://phabricator.services.mozilla.com/D24322
--HG--
extra : moz-landing-system : lando
While the substitution pattern is kind of awful in make, it will allow us
to deal more straightforwardly with the difference between target and
host.
Differential Revision: https://phabricator.services.mozilla.com/D24321
--HG--
extra : moz-landing-system : lando
While this isn't related to the bug, since we're going to touch the
cargo compiler flags, we might as well do this too.
It wasn't previously reliable to pass those flags down because what
cargo uses as target for build scripts and procedural macros is
determined by the rust host, which was not necessarily the same as the
build system host. But as of bug 1523143, they are always the same.
Differential Revision: https://phabricator.services.mozilla.com/D18280
--HG--
extra : moz-landing-system : lando
Now that Make invokes cargo without going through an msys shell,
environment variables are going to be preserved properly, and we can now
"safely" pass the compiler-related variables down to cargo on Windows.
This makes rust target builds use the expected compiler and flags,
instead of the cc-rs crate guessing, picking cl.exe, and using the wrong
one, with the build later failing when linking it all together because
one of the objects is not for the right target.
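As a rough illustration of what passing the compiler down means, assuming
the cc-rs convention of per-target environment variables (the exact set of
variables exported by the build lives in the rust make rules, not here):

import os
import subprocess

def cargo_env(target, cc, cflags):
    env = dict(os.environ)
    # cc-rs looks for CC_<target> (and CFLAGS_<target>) before falling back
    # to plain CC, so a cross build picks the intended compiler instead of
    # guessing and grabbing cl.exe for the wrong target.
    env["CC_" + target.replace("-", "_")] = cc
    env["CFLAGS_" + target.replace("-", "_")] = cflags
    return env

# e.g. subprocess.check_call(["cargo", "build", "--target", target],
#                            env=cargo_env(target, cc, cflags))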
Interestingly, the lmdb code is today built for the wrong target on
aarch64, but somehow, it doesn't break the build on automation,
presumably because the lmdb code is actually dead code, and the linker
eliminates the object as unused, masking the problem.
Depends on D18186
Differential Revision: https://phabricator.services.mozilla.com/D18187
--HG--
extra : moz-landing-system : lando
Double quotes on a command line force Make to use an msys shell when
invoking the command. Single quotes don't have this effect. This is the
last bit that prevented Make from invoking cargo directly on Windows.
Depends on D18184
Differential Revision: https://phabricator.services.mozilla.com/D18185
--HG--
extra : moz-landing-system : lando
These require some awkward setup to keep things working on
non-cross-compiles on non-Windows, but we'll change that shortly in a
later bug.
Depends on D18183
Differential Revision: https://phabricator.services.mozilla.com/D18184
--HG--
extra : moz-landing-system : lando
This is a drive-by change, allowing us to keep the
force-cargo-library-build recipe more like the others.
Depends on D18181
Differential Revision: https://phabricator.services.mozilla.com/D18182
--HG--
extra : moz-landing-system : lando
The `env` program, on Windows, comes from msys, so invoking `env cargo`
guarantees an msys roundtrip, which usually breaks environment variables
in interesting ways.
This moves most of the environment variables we set with `env` (the
easiest ones) to exporting the same values from make itself.
Depends on D18180
Differential Revision: https://phabricator.services.mozilla.com/D18181
--HG--
extra : moz-landing-system : lando
We're going to change how cargo recipes are called, exporting environment
variables rather than wrapping the call with `env`, in order to avoid msys
roundtrips. Given that, it's better to avoid the complexity when not
building rust, and including a separate file only when required helps with
that. It would also be possible to wrap the entire rust section of
rules.mk in the same condition we use for the include, but using a
separate file also makes things clearer.
Differential Revision: https://phabricator.services.mozilla.com/D18180
--HG--
rename : config/rules.mk => config/makefiles/rust.mk
extra : moz-landing-system : lando
The build system has skipped creating target static libraries for a very
long time, except in very specific cases.
We can actually do the same for host static libraries, for which we don't
even need the escape hatch that still allows creating static
libraries.
Depends on D15171
Differential Revision: https://phabricator.services.mozilla.com/D15172
--HG--
extra : moz-landing-system : lando
Summary:
This patch ports xptcodegen.py over to the new perfecthash.py system, removing
some special-case code generators, and taking advantage of the easier-to-use
interface.
In addition, the code was changed to take advantage of the endianness
information from Part 2, allowing us to avoid having to perform endianness swaps
at runtime when hashing nsIDs.
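A minimal sketch of that idea, assuming an nsID laid out as a u32, two u16s
and eight bytes (the helper name is illustrative):

import struct

def nsid_bytes(m0, m1, m2, m3, little_endian):
    # Pack the nsID in the target's byte order at code-generation time, so
    # hashing the same bytes at runtime never needs an endianness swap.
    fmt = ("<" if little_endian else ">") + "IHH8s"
    return struct.pack(fmt, m0, m1, m2, bytes(m3))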
Depends on D2616
Reviewers: froydnj!
Tags: #secure-revision
Bug #: 1479484
Differential Revision: https://phabricator.services.mozilla.com/D2618
The build system knows at build-backend time where to find each IDL
file; making xpidl-process.py rediscover this by requiring
xpidl-process.py to search through directories to find input IDL files
is silly. To remedy this, we're going to modify things so full paths
are passed into the script. Those paths can then be used directly, with
no searching.
The tail end of the xpidl Makefile.in contains a line, generated for
every xpt file:
$(1): $(addsuffix .idl,$(addprefix $(dist_idl_dir)/,$($(basename $(notdir $(1)))_deps)))
This line, in context, is saying that the xpt file depends on all of its
input IDL files. But xpidl-process.py already generates this
information when we pass it --depsdir, which we do. So this code is
redundant with what we already generate, and it can be removed.
The previous patch required us to pass a single -I argument pointing at
$(DIST)/idl so IDL include statements would work correctly. This patch
lifts that limitation and explicitly points xpidl-process.py at the
locations of all the IDL source directories to search for included IDL
files. Invocations of xpidl-process.py no longer depend on IDL files
being copied to the objdir.
Building on the last patch, we can change the build process to pass in
the directories where the input IDL files can be found. It is
convenient to pass in just the relative source directory paths, to
encourage people to not look in the object directory and to make the
command lines slightly shorter.
xpidl-process.py still assumes that included IDL files can be found by
looking in a single directory. We add a single -I argument to the
invocation of xpidl-process.py to accommodate this short-sightedness.
The current IDL build setup assumes that all IDL files can be found in a
single directory. This setup requires that all IDL files be copied to a
single directory, which is suboptimal in terms of disk I/O and also
complicates things like generating IDL files at build time.
As a first step in moving away from this state of affairs,
xpidl-process.py needs to be taught that the input IDL files could
potentially be found in multiple directories. The current setup can
just specify $(DIST)/idl as the lone directory to examine. Future
patches will change this to examine multiple directories.
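A minimal sketch of the kind of lookup this implies (argument and helper
names are illustrative, not the actual xpidl-process.py code):

import os

def find_idl(name, search_dirs):
    # With multiple input directories, an IDL file is resolved by checking
    # each directory in turn instead of assuming a single $(DIST)/idl.
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    raise ValueError("%s not found in %s" % (name, search_dirs))

# e.g. find_idl("nsISupports.idl", ["xpcom/base", "dist/idl"])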
This patch contains the meat of the changes here. The following summarizes the changes:
1. xptinfo.h is rewritten to expose the new interface for reading the XPT data.
The nsXPTInterfaceInfo object exposes methods with the same signatures as
the methods on nsIInterfaceInfo, to make converting code which used
nsIInterfaceInfo as easy as possible, even when those methods don't have
signatures which make a ton of sense anymore. There are also a few methods
which are unnecessary (they return `true` or similar), which should be
removed over time.
Members of the data structures are made private in order to prevent reading
them directly. Code should instead call the getter methods. This should make
it easier to change their memory representation in the future. Constructing
these structs is made possible by making the structs `friend class` with the
XPTConstruct class, which is implemented by the code generator, and is able
to access the private fields.
In addition, rather than using integers with flag constants, I opted for
using C++ bitfields to store individual flags, as I found it made it easier
to both write the code generator, and reason about the layouts of the types.
I was able to shave a byte off of each nsXPTParamInfo (4 bytes -> 3 bytes)
by shoving the flags into spare bits in the nsXPTType. Unfortunately there
was not enough room for the retval flag. Fortunately, we already depend in
our code on the retval parameter being the last parameter, so I worked
around this by removing the retval flag and instead having a `hasretval`
flag on the method itself.
2. An xptinfo.cpp file is added for out-of-line definitions of more complex
methods, and the internal implementation details of the perfect hash.
Notable is the handling of xptshim interfaces. As the type is uniform, a
flag is checked when trying to read constant information, and a different
table with pointers into webidl data structures is checked when the type is
determined to be a shim.
Ideally we could remove this once we remove the remaining consumers of the
existing shim interfaces.
3. A python code generator is added which takes in the json XPT files generated
in the previous part, and emits an xptdata.cpp file with the data structures. I did
my best to heavily comment the code.
This code uses the friend class trick to construct the private fields of the
structs, and avoid a dependency on the ordering of fields in xptinfo.h.
The sInterfaces array's order is determined by a generated perfect hash
which is also written into the binary. This should allow for fast lookups by
IID or name of interfaces in memory. The hash function used for the perfect
hash is a simple FNV hash, as they're pretty fast.
For perfect hashing of names, another table is created which contains
indexes into the sInterfaces table. Lookup by name is less common, and this
form of lookup should still be very fast (a rough sketch of the hashing
idea follows after this list).
4. The necessary Makefiles are updated to use the new code generator, and
generate the file correctly.
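As a rough sketch of the hashing idea mentioned in point 3, using the usual
32-bit FNV-1a constants; this shows one common shape for such a lookup, not
necessarily the exact scheme used by perfecthash.py:

def fnv1a_32(data, basis=0x811c9dc5):
    # Plain 32-bit FNV-1a; the perfect hash reruns it with a per-bucket salt
    # as the basis, which is what makes the final placement collision free.
    h = basis
    for byte in data:
        h = ((h ^ byte) * 0x01000193) & 0xffffffff
    return h

def perfect_lookup(key_bytes, intermediate, entries):
    # A first hash picks a bucket; the bucket's stored salt seeds a second
    # hash that lands on a unique slot in the entries table (sInterfaces).
    salt = intermediate[fnv1a_32(key_bytes) % len(intermediate)]
    return entries[fnv1a_32(key_bytes, salt) % len(entries)]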
This patch handles the actual generation of the static data structures
used to represent XPT information. XPT files are generated in the same
way as they are now, but they are used only as an intermediate
representation to speed up incremental compilation rather than
something used by Firefox itself. Instead of linking XPTs into a
single big XPT file at packaging time, they are linked into a single
big C++ file at build time, that defines the various static consts in
XPTHeader.
In xpt.py, every data structure that can get written to disk gets an
additional code_gen() method that returns a representation of that
data structure as C++ source code. CodeGenData aggregates this
information together, handling deduplication and the final source code
generation.
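As a rough illustration of the code_gen() idea (the class and the emitted
initializer below are simplified stand-ins, not the real xpt.py structures):

class ConstValue(object):
    def __init__(self, kind, value):
        self.kind = kind    # e.g. "int16" or "int32"
        self.value = value

    def code_gen(self):
        # Return this value as C++ source; a constexpr ctor selected via the
        # cast stands in for a designated initializer, which C++ lacks here.
        return "XPTConstValue(static_cast<%s_t>(%d))" % (self.kind, self.value)

# CodeGenData would collect these strings, deduplicate shared entries, and
# concatenate them into the generated C++ file.
print(ConstValue("int32", 42).code_gen())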
The ctors are needed for XPTConstValue to statically initialize the
different union cases without resorting to designated initializers,
which are part of C99, not C++. Designated initializers appear to be
supported in C++ code by Clang and GCC, but not MSVC. The ctors must
be constexpr to ensure they are actually statically initialized so
they can be shared between Firefox processes.
I also removed an unnecessary "union" in XPTConstDescriptor.
Together, these patches reduce the amount of memory reported by
xpti-working-set from about 860,000 bytes to about 200,000 bytes. The
remaining memory is used for xptiInterface and xptiTypelibGuts (which
are thin wrappers around the XPT interfaces and header) and hash
tables to speed up looking up interfaces by name or IID. That could
potentially be eliminated from dynamic allocations in follow up
work. These patches did not affect memory reporting because XPT arenas
are still used by the remaining XPTI data structures.
MozReview-Commit-ID: Jvi9ByCPa6H
--HG--
extra : rebase_source : a9e48e7026aab4ad1b7f97e50424adf4e3f4142f
Now that XPT files are not loaded from files at runtime, code for
packaging XPT files can be removed.
This means that a couple of test XPIDL interfaces will get shipped in
builds to users that weren't before, but I don't think that matters
much.
This also puts XPT files into the local objdir for the XPIDL makefile,
instead of dist/bin, because they are no longer part of the
distribution.
MozReview-Commit-ID: 7gWj8KWUun3
--HG--
extra : rebase_source : 65bac47c2cd1a20b3c675a01b44a25a1d2d3ab7a
This patch handles the actual generation of the static data structures
used to represent XPT information. XPT files are generated in the same
way as they are now, but they are used only as an intermediate
representation to speed up incremental compilation rather than
something used by Firefox itself. Instead of linking XPTs into a
single big XPT file at packaging time, they are linked into a single
big C++ file at build time, that defines the various static consts in
XPTHeader.
In xpt.py, every data structure that can get written to disk gets an
additional code_gen() method that returns a representation of that
data structure as C++ source code. CodeGenData aggregates this
information together, handling deduplication and the final source code
generation.
The ctors are needed for XPTConstValue to statically initialize the
different union cases without resorting to designated initializers,
which are part of C99, not C++. Designated initializers appear to be
supported in C++ code by Clang and GCC, but not MSVC. The ctors must
be constexpr to ensure they are actually statically initialized so
they can be shared between Firefox processes.
I also removed an unnecessary "union" in XPTConstDescriptor.
Together, these patches reduce the amount of memory reported by
xpti-working-set from about 860,000 bytes to about 200,000 bytes. The
remaining memory is used for xptiInterface and xptiTypelibGuts (which
are thin wrappers around the XPT interfaces and header) and hash
tables to speed up looking up interfaces by name or IID. That could
potentially be eliminated from dynamic allocations in follow up
work. These patches did not affect memory reporting because XPT arenas
are still used by the remaining XPTI data structures.
MozReview-Commit-ID: Jvi9ByCPa6H
--HG--
extra : rebase_source : 719dfbcb9f83235c0f1f0766270b7f127f9ab04e
Now that XPT files are not loaded from files at runtime, code for
packaging XPT files can be removed.
This means that a couple of test XPIDL interfaces will get shipped in
builds to users that weren't before, but I don't think that matters
much.
This also puts XPT files into the local objdir for the XPIDL makefile,
instead of dist/bin, because they are no longer part of the
distribution.
MozReview-Commit-ID: 7gWj8KWUun3
--HG--
extra : rebase_source : 6f7d4fd1d6cdea2c14866705a2dc972eb5f43382