This change adds two functions that should be used instead of PR_GetCurrentThread; both return a PRThread. GetCurrentPhysicalThread does exactly the same thing as PR_GetCurrentThread, but it makes it clearer what you're getting when cooperatively scheduled threads are running.
GetCurrentVirtualThread returns the same value for all threads in a cooperative thread pool. The actual value it returns is somewhat immaterial.
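A minimal usage sketch, assuming only what's stated above (both helpers return a PRThread, and GetCurrentVirtualThread returns one shared value per cooperative pool); the assertion helper below is made up for illustration:

    // Sketch only: the declarations mirror the description above; the helper
    // and its use are hypothetical.
    struct PRThread;
    PRThread* GetCurrentPhysicalThread();  // same value as PR_GetCurrentThread()
    PRThread* GetCurrentVirtualThread();   // one shared value per cooperative pool

    // An "owning thread" check that shouldn't trip when execution hops
    // between cooperatively scheduled threads in the same pool.
    bool IsOnOwningThread(PRThread* aOwningVirtualThread) {
      return GetCurrentVirtualThread() == aOwningVirtualThread;
    }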
MozReview-Commit-ID: 4lFwsF2NuzC
A template won't be instantiated until it is used. We make it a compile error
to call Then() on ThenCommand<false> to disallow promise chaining.
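This works because a member template's body is only checked when it is instantiated, so the static_assert fires only for callers of the disallowed case. A simplified sketch of the pattern (not the actual MozPromise code):

    template <bool SupportsChaining>
    struct ThenCommand {
      template <typename... Args>
      void Then(Args&&...) {
        // Only evaluated when Then() is actually called on this instantiation.
        static_assert(SupportsChaining,
                      "Chaining another Then() off this promise is disallowed");
      }
    };

    int main() {
      ThenCommand<true> chainable;
      chainable.Then();          // fine
      ThenCommand<false> terminal;
      // terminal.Then();        // uncommenting this is a compile error
      return 0;
    }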
MozReview-Commit-ID: BUCFgfX4FTJ
--HG--
extra : rebase_source : 0b46b9d10a9b7dec3f15ad4596d9726365f6f859
extra : source : 5676128a7a3ee68bf9eee01bbdf8b983117d5655
This annotates vsprintf-like functions with MOZ_FORMAT_PRINTF. This may
provide some minimal checking of such calls (the GCC docs say that it
checks the format string for "consistency"), but in any case it shouldn't
hurt.
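For reference, here is roughly what the annotation maps to and how a vsprintf-like function is marked; the logging function is invented, and the macro below is a simplified stand-in for the real one (on GCC/Clang it expands to the format attribute, with 0 as the varargs index because the values arrive as a va_list):

    #include <cstdarg>
    #include <cstdio>

    // Simplified stand-in for the real macro from mozilla/Attributes.h.
    #if defined(__GNUC__) || defined(__clang__)
    #  define MOZ_FORMAT_PRINTF(fmtIndex, varargsIndex) \
         __attribute__((format(printf, fmtIndex, varargsIndex)))
    #else
    #  define MOZ_FORMAT_PRINTF(fmtIndex, varargsIndex)
    #endif

    // vsprintf-like: the format string is parameter 1, and the varargs index
    // is 0 because the arguments come in as a va_list rather than as "...".
    void LogV(const char* aFmt, va_list aArgs) MOZ_FORMAT_PRINTF(1, 0);

    void LogV(const char* aFmt, va_list aArgs) {
      vfprintf(stderr, aFmt, aArgs);
    }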
MozReview-Commit-ID: HgnAK1LiorE
--HG--
extra : rebase_source : 9c8d715d6560f89078c26ba3934e52a2b5778b6a
The OrInsert() method adds the new entry to the hashtable if needed, and
returns the newly added entry or the pre-existing one. It allows for a
more concise syntax at the call site.
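The call-site win is easiest to see with a small analogue. This sketch uses std::unordered_map rather than the Gecko hashtable, so the helper below is illustrative only, not the actual API:

    #include <string>
    #include <unordered_map>

    // OrInsert-style helper: add a default entry if the key is missing, and
    // return a reference to whichever entry ends up in the table.
    int& OrInsert(std::unordered_map<std::string, int>& aTable,
                  const std::string& aKey, int aDefault) {
      return aTable.try_emplace(aKey, aDefault).first->second;
    }

    int main() {
      std::unordered_map<std::string, int> counts;
      // One expression at the call site instead of a separate lookup + insert.
      ++OrInsert(counts, "foo", 0);
      ++OrInsert(counts, "foo", 0);  // the second call finds the existing entry
      return counts["foo"] == 2 ? 0 : 1;
    }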
One of the things that I've noticed in profiling startup overhead is that,
even with the startup cache, we spend about 130ms just loading and decoding
scripts from the startup cache on my machine.
I think we should be able to do better than that by doing some of that work in
the background for scripts that we know we'll need during startup. With this
change, we seem to consistently save about 3-5% on non-e10s startup overhead
on talos. But there's a lot of room for tuning, and I think we can get
considerable further improvement with a few more tweaks.
Some notes about the approach:
- Setting up the off-thread compile is fairly expensive, since we need to
create a global object, and a lot of its built-in prototype objects for each
compile. So in order for there to be a performance improvement for OMT
compiles, the script has to be pretty large. Right now, the tipping point
seems to be about 20K.
There's currently no easy way to improve the per-compile setup overhead, but
we should be able to combine the off-thread compiles for multiple smaller
scripts into a single operation without any additional per-script overhead.
- The time we spend setting up scripts for OMT compile is almost entirely
CPU-bound. That means that we have a chunk of about 20-50ms where we can
safely schedule thread-safe IO work during early startup, so if we schedule
some of our current synchronous IO operations on background threads during the
script cache setup, we basically get them for free, and can probably increase
the number of scripts we compile in the background.
- I went with an uncompressed mmap of the raw XDR data for a storage format
(see the sketch after these notes). That currently occupies about 5MB of disk
space. Gzipped, it's ~1.2MB, so compressing it might save some startup disk IO,
but keeping it uncompressed simplifies a lot of the OMT and even main thread
decoding process. More importantly:
- We currently don't use the startup cache in content processes, for a variety
of reasons. However, with this approach, I think we can safely store the
cached script data from a content process before we load any untrusted code
into it, and then share mmapped startup cache data between all content
processes. That should speed up content process startup *a lot*, and very
likely save memory, too. And:
- If we're especially concerned about saving per-process memory, and we keep
the cache data mapped for the lifetime of the JS runtime, I think that with
some effort we can probably share the static string data from scripts between
content processes, without any copying. Right now, it looks like for the main
process, there's about 1.5MB of string-ish data in the XDR dumps. It's
probably less for content processes, but if we could save .5MB per process
this way, it might make it easier to increase the number of content processes
we allow.
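As a rough illustration of the uncompressed-mmap note above, here is a hedged, POSIX-only sketch of mapping a cache file read-only so it can be decoded in place and, eventually, shared between processes; the real cache goes through Gecko's own file and memory-mapping wrappers, and the path handling here is invented:

    #include <cstddef>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // Map the raw cache file read-only so decoding can read straight from the
    // mapping (and so multiple processes could eventually share the same pages).
    const void* MapScriptCache(const char* aPath, size_t* aLengthOut) {
      int fd = open(aPath, O_RDONLY);
      if (fd < 0) {
        return nullptr;
      }
      struct stat st;
      if (fstat(fd, &st) != 0 || st.st_size <= 0) {
        close(fd);
        return nullptr;
      }
      void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      close(fd);  // the mapping remains valid after the descriptor is closed
      if (map == MAP_FAILED) {
        return nullptr;
      }
      *aLengthOut = static_cast<size_t>(st.st_size);
      return map;
    }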
MozReview-Commit-ID: CVJahyNktKB
--HG--
extra : source : 1c7df945505930d2d86a076ee20807104324c8cc
extra : histedit_source : 75e193839edf727874f01b2a9f6852f6c1f087fb%2C3ce966d7dcf2bd0454a7d673d0467097456bd782
The lossy conversion to ASCII here can also fail; we should handle that
as well. Rewriting the code to use MakeScopeExit also avoids tangled
logic and/or duplicating calls to ensure the destination string is
truncated on failure.
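A sketch of the MakeScopeExit shape described above; the conversion steps are hypothetical stand-ins, not the patched functions:

    #include "mozilla/ScopeExit.h"
    #include "nsString.h"

    // Hypothetical fallible steps standing in for the real conversion work.
    bool TryLossyConvertToAscii(const nsAString& aSrc, nsACString& aDest);
    bool TrySomeFollowUpStep(nsACString& aDest);

    nsresult Convert(const nsAString& aSrc, nsACString& aDest) {
      // Ensure the destination is truncated on every failure path without
      // duplicating the Truncate() call at each early return.
      auto cleanup = mozilla::MakeScopeExit([&] { aDest.Truncate(); });

      if (!TryLossyConvertToAscii(aSrc, aDest)) {
        return NS_ERROR_FAILURE;  // the lossy conversion itself can fail
      }
      if (!TrySomeFollowUpStep(aDest)) {
        return NS_ERROR_FAILURE;
      }

      cleanup.release();  // success: keep the converted string
      return NS_OK;
    }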
PLDHashTable takes the result of the hash function and multiplies it by
kGoldenRatio to ensure that it has a good distribution of bits across
the 32-bit hash value, and then zeroes out the low bit so that it can be
used for the collision flag. This result is called hash0. From hash0
it computes two different numbers used to find entries in the table
storage: hash1 is used to find an initial position in the table to
begin searching for an entry; hash2 is then used to repeatedly offset
that position (mod the size of the table) to build a chain of positions
to search.
In a table with capacity 2^c entries, hash1 is simply the upper c bits
of hash0. This patch does not change this.
Prior to this patch, hash2 was the c bits below hash1, padded at the low
end with zeroes when c > 16. (Note that bug 927705, changeset
1a02bec165e16f370cace3da21bb2b377a0a7242, increased the maximum capacity
from 2^23 to 2^26 since 2^23 was sometimes insufficient!) This manner
of computing hash2 is problematic because it increases the risk of long
chains for very large tables, since there is less variation in the hash2
result due to the zero padding.
So this patch changes the hash2 computation by using the low bits of
hash0 instead of shifting it around, thus avoiding 0 bits in parts of
the hash2 value that are significant.
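An illustrative before/after of the hash2 computation (not the literal PLDHashTable code: this assumes 32-bit hash values, capacity 2^c with hashShift = 32 - c, and forces the stride odd so a probe chain can reach every slot of a power-of-two table):

    #include <cstdint>

    // hash1, unchanged by this patch: the upper c bits of hash0.
    uint32_t Hash1(uint32_t aHash0, uint32_t aHashShift) {
      return aHash0 >> aHashShift;
    }

    // Old hash2: the c bits just below hash1.  When c > 16 the low end of the
    // result is zero-padded, so very large tables get less variation in their
    // probe stride and therefore longer chains.
    uint32_t OldHash2(uint32_t aHash0, uint32_t aSizeLog2, uint32_t aHashShift) {
      return ((aHash0 << aSizeLog2) >> aHashShift) | 1;
    }

    // New hash2: the low c bits of hash0, so every bit of the stride carries
    // hash data regardless of the table size.
    uint32_t NewHash2(uint32_t aHash0, uint32_t aSizeLog2) {
      uint32_t sizeMask = (uint32_t(1) << aSizeLog2) - 1;
      return (aHash0 & sizeMask) | 1;
    }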
Note that this changes what hash2 is in all cases except when the table
capacity is exactly 2^16, so it does change our hashing characteristics.
For tables with capacity less than 2^16, it should be using a different
second hash, but with the same amount of random-ish data. For tables
with capacity greater than 2^16, it should be using more random-ish
data.
Note that this patch depends on the patch for bug 1353458 in order to
avoid causing test failures.
MozReview-Commit-ID: JvnxAMBY711
--HG--
extra : rebase_source : dbde93b9312dd10fb3a549ee4098b57e1df5cadf
See also bug 1361495 - PR_SI_HOSTNAME is implemented in NSPR on
Windows by initializing winsock, which can be janky. There don't seem
to be any users of this property, and it raises tracking concerns anyhow,
so remove it.
MozReview-Commit-ID: S2AwzMUgYk
--HG--
extra : rebase_source : 552bed219913f40c002b807be3239d4666a5284b
This possibly incurs an extra function call (depending on exactly how much
inlining the compiler does). But it helps enormously with subsequent refactorings,
because PseudoStack (and related types) don't need to be visible in
GeckoProfiler.h, which is exported outside the profiler.
--HG--
extra : rebase_source : f2dc5952d7444dfe12e627e86e6d37632b283107
When we just had CycleCollectedJSContext (and no CycleCollectedJSRuntime), a
weird thing happened at shutdown:
1. We would call JS_DestroyContext from ~CycleCollectedJSContext. By that time,
the ~XPCJSContext destructor had already finished.
2. Destroying the context runs a final GC. That GC would call back into various
GC callbacks, such as TraceBlackJS and TraceGrayJS.
3. These callbacks would do a virtual method call:
http://searchfox.org/mozilla-central/rev/876c7dd30586f9c6f9c99ef7444f2d73c7acfe7c/xpcom/base/CycleCollectedJSRuntime.cpp#791
4. Normally this method call would call into
XPCContext::TraceNativeBlackRoots. However, C++ changes the vtable for an
object during destruction, so we would only end up calling CycleCollectedJSContext's
version of TraceNativeBlackRoots, which is empty, and we never traced anything.
When I moved this code into the runtime, we actually do call into
XPCJSRuntime::TraceNativeBlackRoots at that time. So the behavior changed, and
that was causing crashes once I nulled out the TLS as you asked. So I removed
these callbacks for the last GC.
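Step 4 is standard C++ behavior: during a base-class destructor, virtual calls dispatch to the base's own override, not the derived one. A minimal standalone illustration (the class names are made up, not the Gecko types):

    #include <cstdio>

    struct Context {                        // stands in for CycleCollectedJSContext
      virtual ~Context() { TraceRoots(); }  // the final-GC callback runs here
      virtual void TraceRoots() { std::puts("Context::TraceRoots (empty)"); }
    };

    struct XPCContext : Context {           // stands in for the XPC subclass
      ~XPCContext() override {}             // has already finished by that point
      void TraceRoots() override {
        std::puts("XPCContext::TraceRoots (never reached from ~Context)");
      }
    };

    int main() {
      {
        XPCContext cx;
        // On destruction, ~XPCContext runs first; by the time ~Context runs,
        // the object's dynamic type is Context, so only the empty TraceRoots
        // is called and nothing gets traced.
      }
      return 0;
    }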
MozReview-Commit-ID: 3do13bjpwQj
This patch keeps a list of all the cooperatively scheduled contexts that are
linked to a runtime. In places where we need to iterate over all contexts (for
GC, specifically), it iterates over the list.
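A rough sketch of the shape of this (type and member names are invented, not the SpiderMonkey ones):

    #include <vector>

    struct CooperativeContext { /* per-context roots, zones, etc. */ };

    struct Runtime {
      // Every cooperatively scheduled context linked to this runtime.
      std::vector<CooperativeContext*> mContexts;

      void LinkContext(CooperativeContext* aCx) { mContexts.push_back(aCx); }

      void MarkRootsForGC() {
        // GC can no longer assume one context per runtime; it walks the list.
        for (CooperativeContext* cx : mContexts) {
          (void)cx;  // trace this context's roots here
        }
      }
    };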
MozReview-Commit-ID: 3pKJX78f2l0