We do this for now because the Ion fast paths assume things about whether slots
are fixed or not, and how reserved slot indices map to fixed slot indices, that
are not true for proxies, because they have an extra reserved slot.
Remove the forbiddenURI pref which was removed in bug 1274893 as well
as browser.safebrowsing.enabled which got renamed in bug 1025965.
Set dummy URLs for all of the network endpoints.
MozReview-Commit-ID: Efk2fv6cC3g
--HG--
extra : rebase_source : 9fbb3eb0fa7f002fe24577a8a0870ec4d1b7cf31
Add support for asm.js global variables with the following types:
Int8x16, Bool8x16, Int16x8, Bool16x8.
We already have the needed code generation support, but tests were
missing for these types, and so their types were omitted from some
critical switches.
MozReview-Commit-ID: B4r7VofjlYL
--HG--
extra : rebase_source : 4df72f2296f814a1ea83d6ff93170ed2049f4361
Currently, the final slice of an incremental GC only gets a GC_CYCLE_END callback, not a GC_SLICE_END callback. So if you are doing anything that expects to see all of the slices, you will be missing one.
Simplify the setup so that every GC is bracketed with CYCLE_BEGIN/END, and every slice is bracketed with SLICE_BEGIN/END, treating a nonincremental GC as a GC with a single slice (which is how the rest of the code already treats it).
--HG--
extra : rebase_source : 8e21300819d517b3e35de14930f53b3ab737a44e
Fix an unused variable warning for `visitor` because it's only used in
the assertion macro.
Fix several no-return-value errors because the compiler cannot assume
the VIXL_UNREACHABLE() macro is actually unreachable.
r=me for trivial patch.
MozReview-Commit-ID: 13IlMyUsXUN
This makes the code a little nicer to read, and means there will be less code churn
if we later add back the ability to share globals.
The holder also gets changed to an actual JS object.
mLoaderGlobal is always null, but the simplification for that will be
made in a later patch.
MozReview-Commit-ID: 7Qg7JSgIxxm
--HG--
extra : rebase_source : 204339b998501c96af35b407ba672a11204956dc
The name obj is not very descriptive, and a later patch will add
another object to this method, so rename it to thisObj. thisObj is the
object we set properties on for a particular .jsm. With global
sharing, it will be different than the global.
MozReview-Commit-ID: 9TPqdbXKYXO
--HG--
extra : rebase_source : c1692bb4f2274f957e602d0e469a544ae9a97e6a
Thanks to the previous patch, we never set any of the function
arguments, so they can be removed, and various code for running
functions can be deleted.
MozReview-Commit-ID: BTIIyDtBPMR
--HG--
extra : rebase_source : 944adf3ac8f1579639e631a478fc286e980972ab
FindTargetObject in DoLoadSubScriptWithOptions will always return a
global object, so the boolean values we pass around to determine if a
global was being reused will always be false. The next patch will
eliminate the various |function| arguments that are now unused.
MozReview-Commit-ID: GvPNFGluRub
--HG--
extra : rebase_source : 76a67f5523153a37942c2730388e62125ecf4390
With the changes in part 1, the local variable |function| is always
null in ObjectForLocation, which lets me remove a lot of dead
code. Additionally, in the second half of the function we know that
|script| is always non-null, because we return with a failure before
then if it is null.
In addition, with these changes WriteCachedFunction is no longer
called, so I removed it.
MozReview-Commit-ID: 60hEPi8S3H4
--HG--
extra : rebase_source : 69ea6110fb85d02d2f1c72c5675014aeebb2dee5
These functions are only needed for B2G-style global sharing, which is
based on getting |this| as the special JSM object by compiling .jsms as
functions.
I left in place the comments relating to this sharing because we may
still reuse that machinery.
MozReview-Commit-ID: IBBW5P70TQm
--HG--
extra : rebase_source : 60cdf549db737ab8dac81c7f48ff7287639a851d
Removing support for this preference means that mReuseLoaderGlobal
will always be false, which lets us eliminate that field, and remove a
lot of code.
This also means that false is always passed to
PrepareObjectForLocation for aReuseLoaderGlobal.
ReadCachedFunction is no longer used, so it is deleted.
MozReview-Commit-ID: 5JD24EYVcQf
--HG--
extra : rebase_source : 3b23b4b8d2b1a2f6a53223477afb4cb13b8a671c
Right now, it is a macro, which causes a warning with clang about
extraneous parentheses. Turning it into a function fixes that and is
also nicer.
MozReview-Commit-ID: KTPA9b6oeUu
--HG--
extra : rebase_source : ae063db5a4b5b14bc4a3a8f64adbbecfc897edd9
This mostly mimics what we do in
GeckoCSSAnimationBuilder::FillInMissingKeyframeValues().
In Gecko we iterate over the properties just once because we can take the
index for both the synthesized start and end keyframe and easily look them up
as needed. However, in this patch we synthesize the start and end keyframes
separately and iterate over the properties twice because that's easier than
getting two indices and then later calling another FFI to dereference each of
them, and neater than getting back two pointers.
MozReview-Commit-ID: 1e0R9AKzgaG
--HG--
extra : rebase_source : a37c406480c2d0ce2b8c4d4ad804622cac2083fa
This patch implements several things:
1. Implement a new constructor for nsStyleImageRequest to receive an existing
ImageValue from the caller.
2. Implement Gecko_ImageValue_Create to allow stylo to create a gecko::ImageValue
object.
3. Implement Gecko_SetXXXXImageValue to allow stylo to pass a created ImageValue
back to gecko.
MozReview-Commit-ID: 1SbW5w8FSV2
--HG--
extra : source : 63777ecf4c7138a0ce5847753a41efcbfc8e2b20
extra : intermediate-source : dced488c119da7e3ae27c903c0dcc76593d8a06d
WrapperMap::Enum::goToNext() is listed as a GC function because the *generated*
stack calls BrowserCompartmentMatcher::match(), which leads to a path containing
a FieldCall of nsISupports.QueryInterface; however, that call never actually
happens. Add nsContentUtils::IsExpandedPrincipal() to the ignore functions
instead, because it is more narrowly scoped and therefore less likely to hide
problems.
MozReview-Commit-ID: ACwkMtRiQk2
--HG--
extra : rebase_source : 2cadcc9f59096b4e5f693f39e2cab93da048f949
The wrappers for strings have a null target compartment and are stored
separately from the other wrappers, so we can simply skip or target them.
MozReview-Commit-ID: CEgU3q7cnmB
--HG--
extra : rebase_source : 10216b58189aa7c9878dc02596e67a296f7bfbdc
Changed the type of the |targetFilter| argument of NukeCrossCompartmentWrappers()
from CompartmentFilter to JSCompartment*, because the target is always a single
compartment; this lets us optimize the iteration to avoid walking the outer map.
MozReview-Commit-ID: 7cDCgJI0H9z
--HG--
extra : rebase_source : 4973dfd4c3326bf48b78088979962e425e35030c
We currently use a flat hashmap to store CCWs, so when we want to nuke the
wrappers targeting a given compartment, we have to iterate over every single
wrapper to find them. If we instead store CCWs in a two-level hashmap keyed by
their target compartment, we can walk only the wrappers we are targeting.
MozReview-Commit-ID: 8h6wO6NLkD9
--HG--
extra : rebase_source : f6550be07979d94ba937aa5f6ac629a98df7aa44
This FFI is used by Servo_AnimationValue_GetTransform(), which needs to
handle and return the |none| transform properly.
MozReview-Commit-ID: 49cFXE2BIbm
--HG--
extra : rebase_source : 9def5e92dc6c0b60c2fb412228a50d7e2f5eb722
We need to traverse the rule tree to get the important rules, so that we do
not override them if they have animations running on the compositor.
MozReview-Commit-ID: 67NO2nIcUfq
--HG--
extra : rebase_source : 24a4ea4ca10e00f409d94c81acacb3db72248b3f
While we're here, also align their buckets and give them bug_numbers fields.
To change the buckets for GC_MAX_PAUSE_MS we need to rename it with a _2
suffix. Because of GC_MAX_PAUSE_MS' use as a historical perf metric, we need
to leave it in place for now.
MozReview-Commit-ID: Cffo3q1pR5E
--HG--
extra : rebase_source : 9843ca718dea8694be8029530a0f8646352c5b37
WrapperMap::Enum::goToNext() is listed as a GC function because the *generated*
stack calls BrowserCompartmentMatcher::match(), which leads to a path containing
a FieldCall of nsISupports.QueryInterface; however, that call never actually
happens. Add nsContentUtils::IsExpandedPrincipal() to the ignore functions
instead, because it is more narrowly scoped and therefore less likely to hide
problems.
MozReview-Commit-ID: ACwkMtRiQk2
--HG--
extra : rebase_source : f4222495676f80cc74a5d38e9440cdc7a9ec0791
extra : source : 9a553b9dc92286434644838c41e8513f943f2c25
The wrappers for strings have a null target compartment and are stored
separately from the other wrappers, so we can simply skip or target them.
MozReview-Commit-ID: CEgU3q7cnmB
--HG--
extra : rebase_source : d93752bf1d10c0f2dd4453ec6f96ee718b65e224
extra : source : ac4a48a831ce289295ca989fc5119611d8560ec1
Changed the type of the |targetFilter| argument of NukeCrossCompartmentWrappers()
from CompartmentFilter to JSCompartment*, because the target is always a single
compartment; this lets us optimize the iteration to avoid walking the outer map.
MozReview-Commit-ID: 7cDCgJI0H9z
--HG--
extra : rebase_source : ee9341168a28b5e6f273c512b0562ee4ddc297bc
extra : source : e80197b115673f259293d112da61c8dd9edc121e
We currently use a flat hashmap to store CCWs, so when we want to nuke the
wrappers targeting a given compartment, we have to iterate over every single
wrapper to find them. If we instead store CCWs in a two-level hashmap keyed by
their target compartment, we can walk only the wrappers we are targeting.
MozReview-Commit-ID: 8h6wO6NLkD9
--HG--
extra : rebase_source : 1bdbf3642e86ee25999fc7a4d2bec062d2efaac0
extra : source : 7e8f428a3edf506fc53bda26eacc2b64641f8346
This avoids some known hazards from replace-malloc itself, and unhides
--disable-replace-malloc hazards if there are any (and there is one, from
bug 1361258), which otherwise wouldn't be caught until riding the trains
(replace-malloc being enabled only on nightly).
The hazard from bug 1361258 that disappears is this one:
Error: Indirect call malloc_hook_table_t.jemalloc_thread_local_arena_hook
Location: replace_jemalloc_thread_local_arena @memory/replace/replace/ReplaceMalloc.cpp#261
Stack Trace:
jemalloc_thread_local_arena @ memory/build/replace_malloc.c#287
Gecko_SetJemallocThreadLocalArena @ layout/style/ServoBindings.cpp#2062
The new hazard from that bug is:
Error: Variable assignment jemalloc.c:arenas_map
Location: jemalloc_thread_local_arena @memory/mozjemalloc/jemalloc.c#3068
Stack Trace:
Gecko_SetJemallocThreadLocalArena @ layout/style/ServoBindings.cpp#2048
Where arenas_map is a thread-local variable, so there really is no
hazard.
--HG--
extra : rebase_source : bea3d2f862ede8c0b90775b6ec9cebb657b9b455
The movmskps SSE instruction only transfers 4 bits from the xmm
register. This works for Bool32x4 and Bool64x2 vectors, but it misses
lanes of the Bool16x8 and Bool8x16 types.
Use a pmovmskb SSE2 instruction instead which transfers 16 byte sign
bits from the xmm register. This lets us resolve even Bool8x16 lanes
correctly.
We know that the input vector is a boolean type, so each lane is known
to be either 0 or -1. There is no harm in checking too many bits of the
types with lanes wider than 8 bits. It won't affect the result.
Check the stopAtWindowProxy flag before checking IsWindowProxy(), since the flag
check is cheaper and in performance-sensitive binding code the flag is false.
MozReview-Commit-ID: 8R4tElYBXaI
--HG--
extra : rebase_source : 78d7942c3269b3d0016815a71099b4ec59587c8a
JS code often uses arrays as queues, with a loop to shift() all items, and this resulted in quadratic behavior for us. That kind of code is much faster now.
PromiseObject now has a createSkippingExecutor function that avoids the need for a dummy executor for internally created promises.
MozReview-Commit-ID: IEzNwMYSdde
The shell has a very basic implementation of Promise job queue handling. This patch moves it into the engine, exposed through friendapi functions. The motivation is that I want to write JSAPI tests for streams, which requires Promise handling. The test harness would need essentially a copy of the shell's Promise handling, which isn't nice.
To be clear, the default implementation isn't used automatically: the embedding has to explicitly request it using js::UseInternalJobQueues.
MozReview-Commit-ID: 6bZ5VG5mJKV
TSAN messes up the wasm signal handler on try builders.
--HG--
extra : rebase_source : c161c4eebc1f43daa0eeae13218b47ece13595c4
extra : histedit_source : 2cf2dad71d822fc583da2b5b633f8f21f05fa66a
One jstest was found to run more slowly under tsan, so add it to the cgc-jittest-timeouts.txt bucket.
Several jit-tests expect to timeout, and are annotated with an expected status code. Currently, we have to force tsan to report a zero status if it finds an error, since otherwise it will cause lots of tests to fail (due to hitting a tsan-detectable problem.) But those zero exit statuses cause the test to fail. Add --unusable-error-status to treat those as passing.
--HG--
extra : rebase_source : 37e9b863ecb6929da0de2a2d947ec31f6ba15d78
extra : histedit_source : c33dacaa0ecc117f17b57cf0dc5ba6c5b775d6ce
The shell has a very basic implementation of Promise job queue handling. This patch moves it into the engine, exposed through friendapi functions. The motivation is that I want to write JSAPI tests for streams, which requires Promise handling. The test harness would need essentially a copy of the shell's Promise handling, which isn't nice.
To be clear, the default implementation isn't used automatically: the embedding has to explicitly request it using js::UseInternalJobQueues.
MozReview-Commit-ID: DwtPsJ0uMtP
This annotates vsprintf-like functions with MOZ_FORMAT_PRINTF. This may
provide some minimal checking of such calls (the GCC docs say that it
checks for the string for "consistency"); but in any case shouldn't
hurt.
MozReview-Commit-ID: HgnAK1LiorE
--HG--
extra : rebase_source : 9c8d715d6560f89078c26ba3934e52a2b5778b6a
FlowGraphSummary walks the bytecode linearly, assuming that a branch
instruction will always be visited before the branch's target. However,
this is not the case for JSOP_LOOPHEAD, leading to an incorrect line
number (-1). This patch changes it to instead reuse the location of the
previous opcode, which is correct in the case of a loop head.
MozReview-Commit-ID: 5OmLmSk2uSn
--HG--
extra : rebase_source : fb773071855bb481747469833ec820ef202d1205
Flushing the cache at startup is already handled automatically by the
AppStartup code, which removes the entire startupCache directory when
necessary. The add-on manager requires being able to flush the cache at
runtime, though, for the sake of updating bootstrapped add-ons.
MozReview-Commit-ID: LIdiNHrXYXu
--HG--
extra : source : 8f4637881ddc42a948c894e62c8486fe8677a938
extra : histedit_source : e69395a2b87b2b0edb394686ed6ee24731ba9fb8
One of the things that I've noticed in profiling startup overhead is that,
even with the startup cache, we spend about 130ms just loading and decoding
scripts from the startup cache on my machine.
I think we should be able to do better than that by doing some of that work in
the background for scripts that we know we'll need during startup. With this
change, we seem to consistently save about 3-5% on non-e10s startup overhead
on talos. But there's a lot of room for tuning, and I think we can get some
considerable improvement with a few ongoing tweaks.
Some notes about the approach:
- Setting up the off-thread compile is fairly expensive, since we need to
create a global object, and a lot of its built-in prototype objects for each
compile. So in order for there to be a performance improvement for OMT
compiles, the script has to be pretty large. Right now, the tipping point
seems to be about 20K.
There's currently no easy way to improve the per-compile setup overhead, but
we should be able to combine the off-thread compiles for multiple smaller
scripts into a single operation without any additional per-script overhead.
- The time we spend setting up scripts for OMT compile is almost entirely
CPU-bound. That means that we have a chunk of about 20-50ms where we can
safely schedule thread-safe IO work during early startup, so if we schedule
some of our current synchronous IO operations on background threads during the
script cache setup, we basically get them for free, and can probably increase
the number of scripts we compile in the background.
- I went with an uncompressed mmap of the raw XDR data for a storage format.
That currently occupies about 5MB of disk space. Gzipped, it's ~1.2MB, so
compressing it might save some startup disk IO, but keeping it uncompressed
simplifies a lot of the OMT and even main thread decoding process. More
importantly:
- We currently don't use the startup cache in content processes, for a variety
of reasons. However, with this approach, I think we can safely store the
cached script data from a content process before we load any untrusted code
into it, and then share mmapped startup cache data between all content
processes. That should speed up content process startup *a lot*, and very
likely save memory, too. And:
- If we're especially concerned about saving per-process memory, and we keep
the cache data mapped for the lifetime of the JS runtime, I think that with
some effort we can probably share the static string data from scripts between
content processes, without any copying. Right now, it looks like for the main
process, there's about 1.5MB of string-ish data in the XDR dumps. It's
probably less for content processes, but if we could save .5MB per process
this way, it might make it easier to increase the number of content processes
we allow.
MozReview-Commit-ID: CVJahyNktKB
--HG--
extra : source : 1c7df945505930d2d86a076ee20807104324c8cc
extra : histedit_source : 75e193839edf727874f01b2a9f6852f6c1f087fb%2C3ce966d7dcf2bd0454a7d673d0467097456bd782
When decoding off-thread, we can't safely access the JS runtime to get the
current JS version, and doing so causes failed assertions.
MozReview-Commit-ID: Lra437aa8SM
--HG--
extra : source : 16259c1af36e138881d18a3f8b0a803f5d4fc3ec
Flushing the cache at startup is already handled automatically by the
AppStartup code, which removes the entire startupCache directory when
necessary. The add-on manager requires being able to flush the cache at
runtime, though, for the sake of updating bootstrapped add-ons.
MozReview-Commit-ID: LIdiNHrXYXu
--HG--
extra : rebase_source : e5b16490f47e20c78d081ad03dec02c6b2874fc3
extra : absorb_source : 6cd94504c8247f375161b2afdca5c61d59cf8f01
One of the things that I've noticed in profiling startup overhead is that,
even with the startup cache, we spend about 130ms just loading and decoding
scripts from the startup cache on my machine.
I think we should be able to do better than that by doing some of that work in
the background for scripts that we know we'll need during startup. With this
change, we seem to consistently save about 3-5% on non-e10s startup overhead
on talos. But there's a lot of room for tuning, and I think we can get some
considerable improvement with a few ongoing tweaks.
Some notes about the approach:
- Setting up the off-thread compile is fairly expensive, since we need to
create a global object, and a lot of its built-in prototype objects for each
compile. So in order for there to be a performance improvement for OMT
compiles, the script has to be pretty large. Right now, the tipping point
seems to be about 20K.
There's currently no easy way to improve the per-compile setup overhead, but
we should be able to combine the off-thread compiles for multiple smaller
scripts into a single operation without any additional per-script overhead.
- The time we spend setting up scripts for OMT compile is almost entirely
CPU-bound. That means that we have a chunk of about 20-50ms where we can
safely schedule thread-safe IO work during early startup, so if we schedule
some of our current synchronous IO operations on background threads during the
script cache setup, we basically get them for free, and can probably increase
the number of scripts we compile in the background.
- I went with an uncompressed mmap of the raw XDR data for a storage format.
That currently occupies about 5MB of disk space. Gzipped, it's ~1.2MB, so
compressing it might save some startup disk IO, but keeping it uncompressed
simplifies a lot of the OMT and even main thread decoding process. More
importantly:
- We currently don't use the startup cache in content processes, for a variety
of reasons. However, with this approach, I think we can safely store the
cached script data from a content process before we load any untrusted code
into it, and then share mmapped startup cache data between all content
processes. That should speed up content process startup *a lot*, and very
likely save memory, too. And:
- If we're especially concerned about saving per-process memory, and we keep
the cache data mapped for the lifetime of the JS runtime, I think that with
some effort we can probably share the static string data from scripts between
content processes, without any copying. Right now, it looks like for the main
process, there's about 1.5MB of string-ish data in the XDR dumps. It's
probably less for content processes, but if we could save .5MB per process
this way, it might make it easier to increase the number of content processes
we allow.
MozReview-Commit-ID: CVJahyNktKB
--HG--
extra : rebase_source : 2ec24c8b0000f9187a9bf4a096ee8d93403d7ab2
extra : absorb_source : bb9d799d664a03941447a294ac43c54f334ef6f5
When decoding off-thread, we can't safely access the JS runtime to get the
current JS version, and doing so causes failed assertions.
MozReview-Commit-ID: Lra437aa8SM
--HG--
extra : rebase_source : 268fc90f390cf6436f3e8a1368a62fdf274d6f8d
The algorithm:
1. Keep a list of all sloppy functions-in-block FunctionBoxes on the
innermost scope.
2. When the scope exits, we'll know all its declared names. Check for possible
early errors for declaring the FiBs as vars. If no early error would occur,
a. If the innermost scope is the var scope, declare the synthesized
var and mark the FunctionBox as an Annex B function.
b. Otherwise, add the FunctionBox to the enclosing scope's list of
sloppy FiBs.