ISO C++ forbids converting a string constant to 'wchar_t*' [-Werror=write-strings]
Either change it to a nullptr (which has the same intent) or pass it through a static.
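A hedged illustration of the two fixes (the variable names here are made up, not the actual call sites):

    void Example() {
      // Rejected under -Werror=write-strings: a wide string literal is const data.
      // wchar_t* label = L"example";

      // Option 1: when the value is effectively unused, nullptr carries the same intent.
      wchar_t* label = nullptr;

      // Option 2: route the literal through a writable static buffer.
      static wchar_t sLabel[] = L"example";
      wchar_t* label2 = sLabel;

      (void)label;
      (void)label2;
    }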
MozReview-Commit-ID: CSunOCyO9PN
--HG--
extra : rebase_source : bfdabc1f463eca75987c6561f7c3ea60acf0340f
Now that nsIAtom is non-scriptable, a .idl file isn't needed.
I made the new nsIAtom.h file by starting with a generated nsIAtom.h file, and
then cleaning it up and removing some stuff that wasn't necessary.
--HG--
extra : rebase_source : 9655fd38984512bd96cf5555048f7774414f6d92
Properly enclose all relevant details of CPUUsageWatcher in the ifdefs
which control whether it should be active. Additionally, clock_gettime is
apparently not defined on OSX prior to 10.12, so this fails to compile
for OSX on the build server but not locally. However, clock_get_time and
getrusage should cover our use cases sufficiently.
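A rough sketch of that fallback path (macOS-only and illustrative; the real CPUUsageWatcher code is organized differently):

    #include <sys/resource.h>
    #include <mach/clock.h>
    #include <mach/mach.h>
    #include <cstdint>

    // Process CPU time (user + system) via getrusage, in microseconds.
    static uint64_t ProcessCPUTimeUS() {
      rusage usage;
      getrusage(RUSAGE_SELF, &usage);
      uint64_t user = uint64_t(usage.ru_utime.tv_sec) * 1000000 + usage.ru_utime.tv_usec;
      uint64_t sys  = uint64_t(usage.ru_stime.tv_sec) * 1000000 + usage.ru_stime.tv_usec;
      return user + sys;
    }

    // Wall-clock time via clock_get_time, which is available before 10.12,
    // unlike clock_gettime.
    static uint64_t WallClockTimeUS() {
      clock_serv_t cclock;
      mach_timespec_t now;
      host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
      clock_get_time(cclock, &now);
      mach_port_deallocate(mach_task_self(), cclock);
      return uint64_t(now.tv_sec) * 1000000 + now.tv_nsec / 1000;
    }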
MozReview-Commit-ID: Ffi6yXLb9gO
--HG--
extra : rebase_source : 84f9cf3b2074883dc6fe6d5a50ff27ffdb008a4f
We would like to be able to see if a given hang in BHR occurred
under high CPU load, as this is an indication that the hang is
of less use to us, since it's likely that the external CPU use
is more responsible for it.
The way this works is fairly simple. We get the system CPU usage
on a scale from 0 to 1, and we get the current process's CPU
usage, also on a scale from 0 to 1, and we subtract the latter
from the former. We then compare this value to a threshold, which
is 1 - (1 / p), where p is the number of (virtual) cores on the
machine. This threshold might need to be tuned, so that we
require an entire physical core in order to not annotate the hang,
but for now it seemed the most reasonable line in the sand.
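In code, the comparison amounts to something like the following (a minimal sketch with made-up names, not the actual CPUUsageWatcher implementation):

    #include <thread>

    // Both usages are fractions of total machine capacity in [0, 1].
    static bool ExternalCPUUsageIsHigh(double aSystemUsage, double aProcessUsage) {
      double external = aSystemUsage - aProcessUsage;
      double p = double(std::thread::hardware_concurrency());  // virtual cores
      double threshold = 1.0 - (1.0 / p);
      // If external usage exceeds the threshold, the hang gets the
      // high-external-CPU annotation.
      return external > threshold;
    }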
I should note that this considers CPU usage in child or parent
processes as external. While we are responsible for that CPU usage,
it still indicates that the stack we receive from BHR is of little
value to us, since the source of the actual hang is external to
that stack.
MozReview-Commit-ID: JkG53zq1MdY
--HG--
extra : rebase_source : 16553a9b5eac0a73cd1619c6ee01fa177ca60e58
These methods return an addrefed raw pointer, which makes them easy to use in
ways that cause leaks. If they're to continue returning an addrefed pointer,
they should explicitly return an already_AddRefed.
This also switches to StaticRefPtr with ClearOnShutdown for the cached
pointers for the sake of sanity.
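A minimal sketch of the resulting pattern, assuming a hypothetical FooService getter (the real classes differ):

    #include "mozilla/ClearOnShutdown.h"
    #include "mozilla/StaticPtr.h"
    #include "mozilla/RefPtr.h"
    #include "nsISupportsImpl.h"

    class FooService final {  // hypothetical cached service
     public:
      NS_INLINE_DECL_REFCOUNTING(FooService)
     private:
      ~FooService() = default;
    };

    static mozilla::StaticRefPtr<FooService> sFooService;

    static already_AddRefed<FooService> GetFooService() {
      if (!sFooService) {
        sFooService = new FooService();
        // Clear the cached pointer at shutdown instead of leaking it.
        mozilla::ClearOnShutdown(&sFooService);
      }
      // already_AddRefed makes the ownership transfer explicit; callers have
      // to assign into a RefPtr/nsCOMPtr rather than silently leaking.
      RefPtr<FooService> service = sFooService;
      return service.forget();
    }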
MozReview-Commit-ID: D0lDpU8Hqug
--HG--
extra : rebase_source : 7b199070805fc0472eaf8409932517700ed23d49
NS_GetWeakReference, called from do_GetWeakReference, QI's its argument
to nsISupportsWeakReference to determine whether a weak reference can be
obtained. If NS_GetWeakReference is already receiving a
nsISupportsWeakReference pointer, or something that can be converted to
one, then we can skip the QI for a small performance win.
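A hedged sketch of what the fast path can look like (the real overload in nsWeakReference.cpp may differ in detail):

    nsIWeakReference*
    NS_GetWeakReference(nsISupportsWeakReference* aInstance, nsresult* aResult) {
      // The static type already guarantees nsISupportsWeakReference, so the
      // QueryInterface that the nsISupports overload performs can be skipped.
      nsIWeakReference* result = nullptr;
      if (aInstance) {
        aInstance->GetWeakReference(&result);
      }
      if (aResult) {
        *aResult = result ? NS_OK : NS_ERROR_FAILURE;
      }
      return result;
    }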
QIing to CC interfaces shows up in Speedometer profiles for a few
classes. Presumably there are many of these objects being created and
destroyed. By making these classes check first for the CC interfaces
directly, rather than going up the inheritance chain, this overhead
should be reduced.
MozReview-Commit-ID: I3sf3my8oua
--HG--
extra : rebase_source : f08884a944d5b4ed1eb1da1070de64f21fc9868a
The main purpose of defining this is to make conversion of places that
use the non-CC variant easier. There are many more places that could
be converted to use these new macros, if somebody felt motivated.
MozReview-Commit-ID: HspjcN76fjg
--HG--
extra : rebase_source : bf3baa586f90f0afbe9229c32d38cb34cc909b9b
We already periodically calculate the ghost window amount after cycle
collection; this just uses a cached value of that for the distinguished amount.
This avoids the overhead of recalculating the value when reporting telemetry.
--HG--
extra : histedit_source : 4ba3ee62fd2871a87970faca8a70b2284e83981d%2C9246ac39cea50c3bb0d1693d8e831c3b3ad33ad9
After all the previous work, we can now base64 decode nsString types
without intermediate conversion steps to nsCString, which is faster and
more memory-efficient.
The current nsString decoding routine indirectly relies on the various
checks this routine performs; making it generic over string types
ensures that we can eventually call it directly from the nsString
decoding routine.
The existing Base64URL code converts from `const char` to `uint8_t`.
We're going to want versions that convert from character types to
character types, so make the decode routines accept generic input and
output types.
The decoding logic is the same for Base64 and Base64URL; we might as
well reuse the routines that we already have for Base64URL decoding so
we don't make mistakes in the logic.
These tables are nearly identical to the ones for base64url decoding,
but ideally will be slightly more readable, since things are broken up
into sets of eight entries at a time.
After all the previous work, we can base64 encode nsString values
directly into nsString values, without having to go through intermediate
nsCString values. Since this routine backs base64 routines exposed to
the web, this change should help with OOMs that we see associated with
base64 encoding.
The nsACString -> nsACString encode routine has several checks in it for
correct operation, and the nsAString -> nsAString encode routine relies
on those checks happening via the nsACString -> nsACString routine.
Once we start encoding nsAStrings directly, we'll still need those
checks, and the easiest way to ensure they happen is to move the core
base64 encode logic for strings into a templated helper.
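As a rough illustration of the templated-helper idea (self-contained, using std::basic_string rather than the real nsA[C]String machinery in Base64.cpp):

    #include <cstddef>
    #include <cstdint>
    #include <string>

    static const char kBase64Chars[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    // One encode routine shared by narrow and wide string types. Wide input
    // is assumed to hold byte-sized values, as the real code requires.
    template <typename SrcChar, typename DestChar>
    void EncodeBase64Sketch(const SrcChar* aSrc, size_t aLen,
                            std::basic_string<DestChar>& aDest) {
      aDest.clear();
      aDest.reserve(((aLen + 2) / 3) * 4);
      size_t i = 0;
      for (; i + 3 <= aLen; i += 3) {
        uint32_t q = (uint32_t(uint8_t(aSrc[i])) << 16) |
                     (uint32_t(uint8_t(aSrc[i + 1])) << 8) |
                      uint32_t(uint8_t(aSrc[i + 2]));
        aDest.push_back(DestChar(kBase64Chars[(q >> 18) & 0x3f]));
        aDest.push_back(DestChar(kBase64Chars[(q >> 12) & 0x3f]));
        aDest.push_back(DestChar(kBase64Chars[(q >> 6) & 0x3f]));
        aDest.push_back(DestChar(kBase64Chars[q & 0x3f]));
      }
      if (i < aLen) {  // 1 or 2 trailing bytes, padded with '='
        uint32_t q = uint32_t(uint8_t(aSrc[i])) << 16;
        bool haveTwo = (i + 1 < aLen);
        if (haveTwo) {
          q |= uint32_t(uint8_t(aSrc[i + 1])) << 8;
        }
        aDest.push_back(DestChar(kBase64Chars[(q >> 18) & 0x3f]));
        aDest.push_back(DestChar(kBase64Chars[(q >> 12) & 0x3f]));
        aDest.push_back(haveTwo ? DestChar(kBase64Chars[(q >> 6) & 0x3f]) : DestChar('='));
        aDest.push_back(DestChar('='));
      }
    }

The real helper additionally carries the correctness checks mentioned above (length limits, allocation failure), which is exactly why keeping them in one templated place matters.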
One less use of NSPR is a good thing. The failure cases that
PL_Base64Encode would have caught for us are:
1. "Truncation".
2. Integer overflow when computing destination string length.
3. Malloc failures.
The first one only gets checked if we pass in zero for the source
length, which we never do. The latter two only get checked if we pass
in a null pointer for the destination, which we never do. So removing
the error handling that PL_Base64Encode implies here is a good thing.
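For reference, the overflow case (2) is just the destination-length computation, which a checked arithmetic type makes explicit; a hedged sketch, not the exact Base64.cpp code:

    #include "mozilla/CheckedInt.h"

    // Four output characters per three input bytes; false on overflow.
    static bool Base64EncodedLength(size_t aBinaryLen, size_t* aOutLen) {
      mozilla::CheckedInt<size_t> len = aBinaryLen;
      len = (len + 2) / 3 * 4;
      if (!len.isValid()) {
        return false;
      }
      *aOutLen = len.value();
      return true;
    }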
Its return value is never used, and most implementations return nullptr anyway.
MozReview-Commit-ID: 8rxC053mmE8
--HG--
extra : rebase_source : 61a0b8b1373396182efd27d3c01b96e5e5541364
This removes the double-include macro hackery that we use to define two
separate string types (nsAString and nsACString) in favor of a templated
solution.
Annotations for Valgrind and the JS hazard analysis are updated, as well as
the Rust bindings generation for string code.
This fixes improper usages of Find where an offset was actually being used for
the boolean ignore case flag. It also fixes a few instances of passing in a
literal wchar_t to our functions where an NS_LITERAL_STRING or char16_t should
be used instead.
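An illustration of the Find pitfall, assuming the old narrow-string overload shape Find(aStr, aIgnoreCase, aOffset, aCount) (hedged; the exact overload set lives in the obsolete string methods):

    #include "nsString.h"

    static void FindExample() {
      nsAutoCString haystack("prefix needle");

      // Compiles, but the 5 is converted to `true` for the ignore-case flag
      // instead of being used as a start offset:
      int32_t wrong = haystack.Find("needle", 5);

      // Spelling the arguments out says what was actually meant:
      int32_t right = haystack.Find("needle", /* aIgnoreCase */ false, /* aOffset */ 5);

      (void)wrong;
      (void)right;
    }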
We now have jemalloc_ptr_info() and moz_malloc_enclosing_size_of(), which can
be used to measure heap blocks via interior pointers. This patch does the
following.
- Adds MOZ_DEFINE_MALLOC_ENCLOSING_SIZE_OF, for defining
measure-via-interior-pointer functions.
- Uses these functions to replace some horrid pointer arithmetic in functions
measuring Rust types.
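A hedged usage sketch of the new macro (the header and function names here are assumptions):

    #include "nsIMemoryReporter.h"  // assumed home, next to MOZ_DEFINE_MALLOC_SIZE_OF

    MOZ_DEFINE_MALLOC_ENCLOSING_SIZE_OF(ExampleMallocEnclosingSizeOf)

    // aInteriorPtr may point anywhere inside a heap block (e.g. into a Rust
    // struct's field); the enclosing block is measured, so no pointer
    // arithmetic back to the allocation start is needed.
    static size_t MeasureEnclosingBlock(const void* aInteriorPtr) {
      return ExampleMallocEnclosingSizeOf(aInteriorPtr);
    }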
--HG--
extra : rebase_source : 5128408256c128222025153ae3e0f924b2499a2a
Also adds a mozilla/ResultExtensions.h header to define the appropriate
conversion functions for nsresult and PRResult. This is in a separate header
since those types are not available in Spidermonkey, and this is the pattern
other *Extensions.h headers follow.
Also removes equivalent NS_TRY macros and WrapNSResult inlines that served the
same purpose in existing code, and are no longer necessary.
MozReview-Commit-ID: A85PCAeyWhx
--HG--
extra : rebase_source : a5988ff770888f901dd0798e7717bcf6254460cd
This allows MOZ_TRY and MOZ_TRY_VAR to be transparently used in XPCOM methods
when compatible Result types are used.
Also removes a compatibility macro in SimpleChannel.cpp, and an identical
specialization in AddonManagerStartup, which are no longer necessary after
this change.
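A minimal sketch of what this enables, with a hypothetical helper that reports failures through Result:

    #include "mozilla/Result.h"
    #include "mozilla/ResultExtensions.h"

    // Hypothetical helper returning Result instead of nsresult.
    static mozilla::Result<int32_t, nsresult> ComputeValue();

    static nsresult GetValue(int32_t* aOut) {
      int32_t value = 0;
      // On error, MOZ_TRY_VAR propagates the nsresult straight out of this
      // nsresult-returning (XPCOM-style) method.
      MOZ_TRY_VAR(value, ComputeValue());
      *aOut = value;
      return NS_OK;
    }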
MozReview-Commit-ID: 94iNrPDJEnN
--HG--
extra : rebase_source : 24ad4a54cbd170eb04ded21794530e56b1dfbd82
When used as an error value, nsresult should never be NS_OK, which means that
we should be able to safely pack simple nsresult Result values into a single
word.
MozReview-Commit-ID: GJvnyTPjynk
--HG--
extra : rebase_source : ab5a64b545dfbfe9bbef167f8b63ecbf00b16e07
We now have jemalloc_ptr_info() and moz_malloc_enclosing_size_of(), which can
be used to measure heap blocks via interior pointers. This patch does the
following.
- Adds MOZ_DEFINE_MALLOC_ENCLOSING_SIZE_OF, for defining
measure-via-interior-pointer functions.
- Uses these functions to replace some horrid pointer arithmetic in functions
measuring Rust types.
--HG--
extra : rebase_source : 81940a72a4b6355b1e99cede4db40db68d4afad3
Replace it with NS_INTERFACE_MAP_BEGIN_CYCLE_COLLECTION, because it
has been the same for a while.
MozReview-Commit-ID: 5agRGFyUry1
--HG--
extra : rebase_source : 5388c56b2f6905c6ef969150f0c5b77bf247624d
In some unknown circumstance, possibly if the parent process has a
different version than the child process, the child does not receive
scheduler prefs, which makes it read out of an uninitialized local
variable. This can probably happen because the scheduler prefs are
checked before we do the ContentChild::Init version check. Bill also
suggested that the pref env var might be getting truncated.
This patch improves on the situation by using null for the prefs array
if none was sent, and falling back on the default values, which leave
the scheduler disabled.
This also checks that the pref string is at least long enough to avoid
a buffer overflow. Note that if the end of the string isn't an integer
we'll end up with an sPrefThreadCount of zero, which can't be good.
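A hedged sketch of the defensive read; every name and the string layout here are placeholders, not the actual Scheduler code:

    #include <cstdlib>
    #include <cstring>

    static int sPrefThreadCountSketch = 0;

    static void ReadSchedulerPrefsSketch(const char* aPrefString) {
      // Assumed minimum length for a well-formed "<flag>,<count>" string.
      static const size_t kMinLength = 3;
      if (!aPrefString || strlen(aPrefString) < kMinLength) {
        // No prefs (or a truncated string) from the parent: keep the default
        // values, which leave the scheduler disabled.
        return;
      }
      // atoi() yields 0 when the tail isn't an integer, hence the note above
      // about ending up with a thread count of zero.
      sPrefThreadCountSketch = atoi(aPrefString + 2);
    }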
MozReview-Commit-ID: ByHLFMEpgyZ
--HG--
extra : rebase_source : 8f6368b88ec3746f4d1c7716a962bb2ac3c2f3b5
To mitigate risk for beta uplift, the logic here is limited to what we need for QueryDosDeviceW on Win7x64. A better long-term fix would be to combine this with the more general mov logic in the REX.W section.
MozReview-Commit-ID: BykQSYY61Ua
--HG--
extra : rebase_source : 84ad8ff28865cb04e21d6234a09b06202ca4d363
The FileLocation(nsIZipArchive*) constructor is declared, but not actually
implemented, so attempting to use it causes a linking error.
Additionally, when a FileLocation is created from an existing zip archive
(such as one from the zip cache or the Omnijar service), it's helpful to have
direct access to that archive rather than having to open a new copy, or infer
it from the path.
MozReview-Commit-ID: 2U14gAm0FYL
--HG--
extra : rebase_source : 97a420f5cc1a97a24debee3764989916be79bcaf
It appears some Android devices like to send spurious RT signals to our process
which we interpret to mean a gc/cc log is being requested. This causes large
files to pile up in the device storage. We can just disable this feature for
Android as it would be pretty hard for a user to actually use (they can just
go to about:memory). Automation can still enable the FIFO queue if we ever
want to start dumping memory reports on device again.
--HG--
extra : rebase_source : 80cb04473677db7f3edc9377b2fca4c72eb63b71
The virtual call to this method shows up in profiles, and would therefore be
nice to avoid if possible. We can do that by making nsIWeakReference hold the
weak reference itself, which lets us implement a non-virtual QueryReferent
method.
nsQueryReferent is defined as an nsCOMPtr_helper, which implies that
calling its operator() method requires a virtual call. While
nsQueryReferent is marked `final`, compiler inlining decisions make it
impossible to de-virtualize the call to operator(). However, we have
many other classes returned by do_* functions that nsCOMPtr handles
directly, requiring no extra virtual calls, and we can give
nsQueryReferent the same treatment.
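The call pattern that benefits, for reference (aWeak stands in for a cached nsWeakPtr member):

    #include "nsCOMPtr.h"
    #include "nsIObserver.h"
    #include "nsIWeakReferenceUtils.h"

    static void NotifyIfAlive(const nsWeakPtr& aWeak, nsISupports* aSubject) {
      // Previously this went through two virtual calls: the nsCOMPtr_helper
      // operator() behind do_QueryReferent, and nsIWeakReference::QueryReferent.
      // With these changes nsCOMPtr handles nsQueryReferent directly and the
      // QueryReferent it calls is non-virtual.
      nsCOMPtr<nsIObserver> observer = do_QueryReferent(aWeak);
      if (observer) {
        observer->Observe(aSubject, "example-topic", nullptr);
      }
    }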
There's no reason for them to be separate, and we can use the |kind| field to
distinguish the two kinds when necessary.
This lets us remove the duplication of ScriptableToString(), ToUTF8String(),
and ScriptableEquals().
It also lets us use |Atom*| pointers instead of |nsIAtom*| pointers in various
places within nsAtomTable.cpp, which de-virtualizes various calls and removes
the need for some static_casts.
--HG--
extra : rebase_source : 2f9183323446e353f8cc5dcedf57d9dc9a38f0a7
XPConnect wrapper overhead for this interface has been showing up heavily in a
lot of my profiles, in some places accounting for 50ms of the 80ms we spend
getting <browser> messageManagers. This improves the situation
considerably.
MozReview-Commit-ID: 9d1hCORxsYG
--HG--
rename : dom/base/nsIFrameLoader.idl => dom/webidl/FrameLoader.webidl
extra : rebase_source : d8a1fc1a19632ba36a9fc6f63873f7534671a13b
This is straightforward, with only two notable things.
- `#include "nsXPIDLString.h"` is replaced with `#include "nsString.h"`
throughout, because all nsXPIDLString.h did was include nsString.h. The
exception is for files which already include nsString.h, in which case the
patch just removes the nsXPIDLString.h inclusion.
- The patch removes the |xpidl_string| gtest, but improves the |voided| test to
cover some of its ground, e.g. testing Adopt(nullptr).
--HG--
extra : rebase_source : 452cc4a08046a1adb1a8099a7e85a1917de5add8
We should not forward-declare the nsString classes directly; instead we should
use nsStringFwd.h. This will make changing the underlying types easier.
--HG--
extra : rebase_source : b2c7554e8632f078167ff2f609392e63a136c299
These are all easy cases where an nsXPIDLCString local variable is set via
getter_Copies() and then is null checked. The patch uses IsVoid() to replace
the null checks (and get() and EqualsLiteral() calls to replace any implicit
conversions).
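A hedged before/after sketch of the pattern, using the pref API shape from the time of this patch (the pref name is made up):

    #include "nsIPrefBranch.h"
    #include "nsString.h"

    static void ReadExamplePref(nsIPrefBranch* aPrefs) {
      // Before: nsXPIDLCString value; ... ; if (value) { ... }
      nsCString value;
      aPrefs->GetCharPref("example.pref", getter_Copies(value));
      if (!value.IsVoid()) {        // replaces the old implicit null check
        // value.get() stands in where a raw const char* really is needed
      }
    }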
--HG--
extra : rebase_source : 484ad42a7816b34b86afbe072e04ba131c1619c6
The class, in practice, was already doing nothing in that case, but because it
was half #ifdef'ed on EARLY_BETA_OR_EARLIER, that led to build failures for
unused code on late beta. Just remove the class completely on late beta.
--HG--
extra : rebase_source : 801b0b9bb9faf41270f72f616e720d5725e8b25c