MozReview-Commit-ID: A1lCqvbQYAF
There is no clean API-based solution to this, so instead I went grovelling
through the DCOM wire protocol and was able to write a function that converts
handler OBJREFs into standard OBJREFs.
See also:
https://msdn.microsoft.com/en-us/library/cc226801
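As background, here is a minimal sketch of the conversion idea, assuming the
OBJREF wire layout documented in MS-DCOM; the function name and the raw-buffer
approach are illustrative, not the actual code from this patch:

    // Illustrative only. An OBJREF_HANDLER differs from an OBJREF_STANDARD by
    // one extra CLSID (the handler's class) between the STDOBJREF and the
    // DUALSTRINGARRAY, so conversion is "flip the flag, drop the CLSID".
    #include <objbase.h>
    #include <string.h>
    #include <vector>

    static const DWORD kObjRefSignature = 0x574F454D;  // 'MEOW'
    static const DWORD kObjRefStandard = 0x1;
    static const DWORD kObjRefHandler = 0x2;

    // Byte offsets in a serialized OBJREF (per MS-DCOM, little-endian):
    //   0: DWORD signature, 4: DWORD flags, 8: IID (16 bytes),
    //   24: STDOBJREF (40 bytes), 64: CLSID (handler variant only, 16 bytes),
    //   then DUALSTRINGARRAY.
    static const size_t kClsidOffset = 4 + 4 + 16 + 40;

    bool ConvertHandlerObjRefToStandard(std::vector<BYTE>& aBuf) {
      if (aBuf.size() < kClsidOffset + sizeof(CLSID)) {
        return false;
      }
      DWORD sig, flags;
      memcpy(&sig, aBuf.data(), sizeof(sig));
      memcpy(&flags, aBuf.data() + 4, sizeof(flags));
      if (sig != kObjRefSignature || flags != kObjRefHandler) {
        return false;
      }
      flags = kObjRefStandard;
      memcpy(aBuf.data() + 4, &flags, sizeof(flags));
      // Drop the handler CLSID so the rest lines up as an OBJREF_STANDARD.
      aBuf.erase(aBuf.begin() + kClsidOffset,
                 aBuf.begin() + kClsidOffset + sizeof(CLSID));
      return true;
    }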
--HG--
extra : rebase_source : a650055c4adda3a1d99262e47f2b463074c6b935
This permits NSS to load libsoftokn3.dylib even when read access to the rest of
the file system is removed (as is the goal for content sandbox level 3).
This is needed for WebCrypto.
MozReview-Commit-ID: Bh54b87zIjY
--HG--
extra : rebase_source : e1fa59648d683e97a3bc1310ac1c362009f657f8
This adds the RemoteType annotation to a content crash report so that we can
distinguish between content processes that crashed while running remote, local
or extension code. The annotation is passed along with the others to Socorro by the
crashreporter and is also whitelisted for inclusion in the crash ping.
MozReview-Commit-ID: 4avo0IWfMGf
--HG--
extra : rebase_source : 8d03f7e166b5762a5ce7cab13c2101302b4f1d2f
If the "security.sandbox.content.level" preference is set to a value less than
1, all consumers will automatically treat it as if it were level 1. On Linux and
Nightly builds, setting the sandbox level to 0 is still allowed, for now.
MozReview-Commit-ID: 9QNTCkdbTfm
--HG--
extra : rebase_source : cd5a853c46a5cd334504b339bef8df30a3cabe51
If the "security.sandbox.content.level" preference is set to a value less than
1, all consumers will automatically treat it as if it were level 1. On Linux and
Nightly builds, setting the sandbox level to 0 is still allowed, for now.
MozReview-Commit-ID: 9QNTCkdbTfm
--HG--
extra : rebase_source : 1a26ffc5b9f80e6df4c37c23f506e907ba44053a
Full Firefox on Linux can now be run with a --headless flag.
This includes seven parts:
1) Runs all marionette tests in headless mode.
2) Prevents crashes where Firefox calls into GTK.
3) Adds a headless screen helper which supports changing the headless
screen size with the environment variables MOZ_HEADLESS_WIDTH and
MOZ_HEADLESS_HEIGHT (see the sketch after this list).
4) Supports simulating moving a headless window.
5) Adds a stubbed-out nsSound implementation.
6) Supports simulating size mode changes of headless windows.
7) Adds the --headless flag for Firefox.
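To make item 3 concrete, here is a minimal sketch of reading those environment
variables; the helper name and the fallback size are assumptions for
illustration:

    #include <cstdlib>

    // Hypothetical helper: derive the headless screen size from the
    // MOZ_HEADLESS_WIDTH/MOZ_HEADLESS_HEIGHT environment variables, falling
    // back to an assumed default when they are unset or unparsable.
    static void GetHeadlessScreenSize(int* aWidth, int* aHeight) {
      *aWidth = 1366;
      *aHeight = 768;
      if (const char* w = std::getenv("MOZ_HEADLESS_WIDTH")) {
        if (int value = std::atoi(w)) {
          *aWidth = value;
        }
      }
      if (const char* h = std::getenv("MOZ_HEADLESS_HEIGHT")) {
        if (int value = std::atoi(h)) {
          *aHeight = value;
        }
      }
    }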
This permission was needed for memory bloat logging, which is used by leaktest,
including logging from processes that crash intentionally. Now we restrict
ourselves to allowing writes only to the location needed for this logging,
rather than all of /private/var.
MozReview-Commit-ID: 5AbJEZlDHNV
--HG--
extra : rebase_source : 26936b8d8bca53f2c37a195b5e7c69c151ec18d2
Currently the profiler mostly uses an array of strings to represent which
features are available and in use. This patch changes the profiler core to use
a uint32_t bitfield, which is a much simpler and faster representation.
(nsProfiler and the profiler add-on still use the array of strings, alas.) The
new ProfilerFeature type defines the values in the bitfield.
One side-effect of this change is that profiler_feature_active() can now be
used to query all features; previously only a subset could be queried this way.
Another side-effect is that profiler_get_available_features() no longer incorrectly
indicates support for Java and stack-walking when they aren't supported. (The
handling of task tracer support is unchanged, because the old code handled it
correctly.)
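For illustration, a simplified sketch of the bitfield representation; the
feature names and bit positions below are only a subset chosen for the example:

    #include <cstdint>

    // Each feature occupies one bit of a uint32_t instead of one string in an
    // array. (Illustrative subset; the real ProfilerFeature list is longer.)
    struct ProfilerFeature {
      static const uint32_t Java       = 1u << 0;
      static const uint32_t JS         = 1u << 1;
      static const uint32_t StackWalk  = 1u << 2;
      static const uint32_t TaskTracer = 1u << 3;
    };

    static uint32_t sActiveFeatures = 0;  // set when the profiler starts

    // With a bitfield, querying any feature is a single mask test.
    static bool FeatureActive(uint32_t aFeature) {
      return (sActiveFeatures & aFeature) != 0;
    }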
The content process stores the incoming initial gfxVars updates, which are
lazily applied when gfxVars is first initialized.
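A hedged sketch of this store-then-apply pattern; the type, container, and
function names below are hypothetical stand-ins, not the actual gfxVars code:

    #include <string>
    #include <vector>

    // Hypothetical representation of one incoming gfxVar update.
    struct GfxVarUpdate {
      unsigned index;      // which var changed
      std::string value;   // its serialized new value
    };

    void ApplyUpdate(const GfxVarUpdate& aUpdate);  // assumed to exist elsewhere

    static std::vector<GfxVarUpdate>* sPendingUpdates = nullptr;
    static bool sInitialized = false;

    // Called from IPC when the parent pushes updates before initialization.
    void ReceiveUpdate(const GfxVarUpdate& aUpdate) {
      if (sInitialized) {
        ApplyUpdate(aUpdate);
        return;
      }
      if (!sPendingUpdates) {
        sPendingUpdates = new std::vector<GfxVarUpdate>();
      }
      sPendingUpdates->push_back(aUpdate);
    }

    // Called on first use: drain whatever arrived early.
    void InitializeVars() {
      sInitialized = true;
      if (sPendingUpdates) {
        for (const GfxVarUpdate& u : *sPendingUpdates) {
          ApplyUpdate(u);
        }
        delete sPendingUpdates;
        sPendingUpdates = nullptr;
      }
    }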
MozReview-Commit-ID: ExUVdr5xGLb
--HG--
extra : rebase_source : fd6f3e1bc4eabdd85447eff0c0fa22537747431f
Remove the sync protocol AllocateTabId. Instead we generate the tab id in
each process with nsContentUtils::GenerateTabId and register RemoteFrameInfo
in the parent process. If the tab id was generated in a content process, it is
sent to the parent through either the PBrowser constructor or
PContent::CreateChildProcess.
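One plausible scheme for generating collision-free tab ids without a sync
round trip is sketched below; the process-id accessor and the exact bit layout
are assumptions, and the real nsContentUtils::GenerateTabId may differ:

    #include <cstdint>

    // Assumed accessor for this process's unique content-process id
    // (0 in the parent process).
    uint64_t CurrentProcessId();

    // Put the process id in the high bits and a per-process counter in the
    // low bits, so ids generated concurrently in different processes never
    // collide and no IPC is needed.
    uint64_t GenerateTabId() {
      static uint64_t sNextTabId = 0;
      return (CurrentProcessId() << 32) | (++sNextTabId & 0xFFFFFFFFu);
    }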
MozReview-Commit-ID: D3W2fK9eCNH
--HG--
extra : rebase_source : 1913f8f586537be1c82a70a19cc8c6351671d0df
LocaleService serves two main functions. It is a central place for all code in the
engine to learn about locales, but it also does the language negotiation and selection.
The former is relevant in all processes, but the latter should only be performed
by the "main" process. In case of current Desktop Firefox, the parent process
is the one performing all the language negotiation, and content processes should
operate in the "client" mode.
In Fennec, there's a Java app on top of Gecko which should work as a "server"
and then all processes, including parent process of Gecko is merely a "client" for that.
This refactor finalizes this duality making it easily configurable to define in
which mode a given LocaleService operates.
The server-client model allows all clients to stay in sync with the server,
but operate transparently for all callers just returning the right values.
In order to initialize LocaleService in the client mode in the child process
with the right locales, I'm adding the list of app locales to XPCOMInitData and
then calling LocaleService::SetAppLocales during child process initialization.
In order to keep the list up to date, I'm adding intl:app-locales-changed to
the list of observed topics; when it fires, I send the updated list to the
child process, which passes it to LocaleService::SetAppLocales.
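A rough parent-side sketch of that observer path follows; the observer class
and the SendUpdateAppLocales message name are assumptions for illustration, and
only the shape of the flow is meant to be accurate:

    // Parent process: when the app locales change, push the new list to every
    // live content process so its client-mode LocaleService stays in sync.
    NS_IMETHODIMP
    AppLocalesObserver::Observe(nsISupports* aSubject, const char* aTopic,
                                const char16_t* aData) {
      if (!strcmp(aTopic, "intl:app-locales-changed")) {
        nsTArray<nsCString> appLocales;
        LocaleService::GetInstance()->GetAppLocalesAsLangTags(appLocales);

        nsTArray<ContentParent*> processes;
        ContentParent::GetAll(processes);
        for (ContentParent* cp : processes) {
          // Hypothetical IPC message; the child-side handler calls
          // LocaleService::SetAppLocales(appLocales) on receipt.
          Unused << cp->SendUpdateAppLocales(appLocales);
        }
      }
      return NS_OK;
    }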
MozReview-Commit-ID: K9X6berF3IO
--HG--
extra : rebase_source : ca5e502d064023fddfd63fe6fe5eccefce8dee52
IPCBlobInputStream is a new type of nsIInputStream that is used only in the
content process when a Blob is sent from parent to child. For now, this
inputStream is just cloneable.
When the parent process sends a Blob to a content process, it has the Blob and
its inputStream. With that inputStream it creates an IPCBlobInputStreamParent
actor. This actor keeps the inputStream alive for later use (not part of this
patch).
On the child side we have, of course, an IPCBlobInputStreamChild actor. This
actor is able to create an IPCBlobInputStream when CreateStream() is called.
This means that one IPCBlobInputStreamChild can manage multiple
IPCBlobInputStreams, one more each time a stream is cloned. When the last of
these streams is released, the child actor sends a __delete__ request to the
parent side; the parent actor is deleted, and the original inputStream on the
parent side is released as well.
IPCBlobInputStream is a special inputStream because every method except
Available() fails. Basically, this inputStream cannot be used in the content
process for anything other than knowing the size of the original stream.
In the following patches, I'll introduce an async way to use it.
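A condensed sketch of such a size-only stream is below; it is simplified and
omits the cloneable interface and the actor plumbing described above:

    #include "nsIInputStream.h"

    // Simplified: every nsIInputStream method except Available() fails, so
    // the only thing a content-process consumer can learn is the size.
    class SizeOnlyInputStream final : public nsIInputStream {
     public:
      NS_DECL_THREADSAFE_ISUPPORTS

      explicit SizeOnlyInputStream(uint64_t aSize) : mSize(aSize) {}

      NS_IMETHOD Available(uint64_t* aLength) override {
        *aLength = mSize;  // the one piece of usable information
        return NS_OK;
      }

      NS_IMETHOD Read(char*, uint32_t, uint32_t*) override {
        return NS_ERROR_NOT_IMPLEMENTED;
      }
      NS_IMETHOD ReadSegments(nsWriteSegmentFun, void*, uint32_t,
                              uint32_t*) override {
        return NS_ERROR_NOT_IMPLEMENTED;
      }
      NS_IMETHOD Close() override { return NS_ERROR_NOT_IMPLEMENTED; }
      NS_IMETHOD IsNonBlocking(bool*) override {
        return NS_ERROR_NOT_IMPLEMENTED;
      }

     private:
      ~SizeOnlyInputStream() = default;
      uint64_t mSize;
    };

    NS_IMPL_ISUPPORTS(SizeOnlyInputStream, nsIInputStream)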
This patch does a few things:
a) Adds the resources location from the .app directory to the read whitelist.
b) For non-packaged builds, mach run (and various mach tests) set an environment
variable for the repo location, which we allow reads from.
r=haik,froydnj
MozReview-Commit-ID: KNvAoUs5Ati
--HG--
extra : rebase_source : 81ba8bfee0ca96979cf8e30d75cdd47f06bc10ea
The goal of this patch is to remove the call to the sync IPC
GetCompositorOptions message from TabChild::InitRenderingState. In order
to do this, we have InitRenderingState take the CompositorOptions as an
argument instead, and propagate that backwards through the call sites.
Eventually we can propagate it back to a set of already-sync IPC
messages in PCompositorBridge that are used during layers id
registration (NotifyChildCreated, NotifyChildRecreated, etc.). Therefore
this patch effectively piggybacks the CompositorOptions sync IPC onto
these pre-existing sync IPC messages.
The one exception is when we propagate it back to the AdoptChild call.
If this message were sync we could just use it like the others and have
it return a CompositorOptions. However, it is async, so instead we add
another call to GetCompositorOptions here temporarily. This will be
removed in the next patch.
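Sketched below is the shape of the resulting signature change; the surrounding
parameters are abbreviated and the member assignment is an assumption:

    // Before: InitRenderingState issued a sync GetCompositorOptions IPC call.
    // After: the options arrive as an argument, having been carried back
    // through the already-sync messages used during layers id registration.
    void TabChild::InitRenderingState(/* ...existing arguments... */
                                      const CompositorOptions& aOptions) {
      mCompositorOptions = Some(aOptions);  // assumed Maybe<CompositorOptions>
      // ... existing rendering setup continues unchanged ...
    }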
MozReview-Commit-ID: AtdYOuXmHu4
--HG--
extra : rebase_source : 5b80831cf84d3a4b57b2214a12ccf8a896cfa3a7
We add a new "on-off" protocol PURLClassifierLocal which calls
nsIURIClassifier.asyncClassifyLocalWithTables on construction and
calls back on destruction. Pretty much the same design as PURLClassifier.
In order to avoid code duplication, the actor implementation is templatized
and |MaybeInfo| in PURLClassifier.ipdl is moved around.
A test case is included. The custom event target for labeling is not in place
yet; that will be done in Bug 1353701.
MozReview-Commit-ID: IdHYgdnBV7S
--HG--
extra : rebase_source : ab1c896305b9f76cab13a92c9bd88c2d356aacb7
MozReview-Commit-ID: GTQF3x1pBtX
A general outline of the COM handler (a.k.a. the "smart proxy"):
COM handlers are pieces of code that are loaded by the COM runtime along with
a proxy and are layered above that proxy. This enables the COM handler to
interpose itself between the caller and the proxy, thus providing the
opportunity for the handler to manipulate an interface's method calls before
those calls reach the proxy.
Handlers are regular COM components that live in DLLs and are declared in the
Windows registry. In order to allow a handler (and an optional payload to be
sent with the proxy) to be specified, the mscom library allows its clients to
provide an implementation of the IHandlerProvider interface.
IHandlerProvider consists of 5 functions (sketched after this list):
* GetHandler returns the CLSID of the component that should be loaded into
the COM client's process. If GetHandler returns a failure code, then no
handler is loaded.
* GetHandlerPayloadSize and WriteHandlerPayload are for obtaining the payload
data. These calls are made on a background thread but need to do their work
on the main thread. We declare the payload struct in IDL. MIDL generates two
functions, IA2Payload_Encode and IA2Payload_Decode, which are used by
mscom::StructToStream to read and write that struct to and from buffers.
* The a11y payload struct also includes an interface, IGeckoBackChannel, that
allows the handler to communicate directly with Gecko. IGeckoBackChannel
currently provides two methods: one to allow the handler to request fresh
cache information, and the other to provide Gecko with its IHandlerControl
interface.
* MarshalAs accepts an IID that specifies the interface that is about to be
proxied. We may want to send a more sophisticated proxy than the one that
is requested. The desired IID is returned by this function. In the case of
a11y interfaces, we should always return IAccessible2_3 if we are asked for
one of its parent interfaces. This allows us to eliminate round trips to
resolve more sophisticated interfaces later on.
* NewInstance, which is needed to ensure that all descendant proxies are also
  imbued with the same handler code.
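The sketch promised above; signatures are abbreviated and do not exactly match
the mscom library's headers:

    #include <objbase.h>

    // Simplified sketch of the client-implemented provider interface; the
    // real interface also threads through mscom-specific smart pointer types.
    struct IHandlerProviderSketch : public IUnknown {
      // CLSID of the handler DLL's component to load in the COM client's
      // process; a failure code means "load no handler".
      STDMETHOD(GetHandler)(CLSID* aHandlerClsid) = 0;

      // Payload accessors, called on a background thread but expected to do
      // their real work on the main thread.
      STDMETHOD(GetHandlerPayloadSize)(DWORD* aOutPayloadSize) = 0;
      STDMETHOD(WriteHandlerPayload)(IStream* aStream) = 0;

      // Given the requested IID, return the (possibly more derived) interface
      // that should actually be proxied, e.g. IAccessible2_3 when asked for
      // one of its parent interfaces.
      STDMETHOD_(REFIID, MarshalAs)(REFIID aRequestedIid) = 0;

      // Ensure descendant proxies are imbued with the same handler code.
      STDMETHOD(NewInstance)(REFIID aIid,
                             IHandlerProviderSketch** aOutNewProvider) = 0;
    };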
The main focus of this patch is as follows:
1. Provide an implementation of the IHandlerProvider interface;
2. Populate the handler payload (i.e., the cache) with data;
3. Modify CreateHolderFromAccessible to specify the HandlerPayload object;
4. Receive the IHandlerControl interface from the handler DLL and move it
into the chrome process.
Some more information about IHandlerControl:
There is one IHandlerControl per handler DLL instance. It is the interface that
we call in Gecko when we need to dispatch an event to the handler. In order to
ensure that events are dispatched in the correct order, we need to dispatch
those events from the chrome main thread so that they occur in sequential order
with calls to NotifyWinEvent.
--HG--
extra : rebase_source : acb44dead7cc5488424720e1bf58862b7b30374f
This is the most important part of the patch series. It removes the
PScreenManager protocol and uses ScreenManager directly in the content
processes.
Initial and subsequent updates are sent via PContent::RefreshScreens.
The ScreenDetails struct is kept to serialize Screen over IPC.
nsIScreenManager::ScreenForNativeWidget is removed because
nsIWidget::GetWidgetScreen can replace it. nsIScreen::GetId is removed
because it's not useful for the more general Screen class.
MozReview-Commit-ID: 5dJO3isgBuQ
--HG--
extra : rebase_source : 06aa4e4fd56e2b2af1e7483aee7c0cc7f35bdb97
It's not used anywhere. Removing it will make removing PScreenManager
easier.
MozReview-Commit-ID: 5dn8kDhTZVl
--HG--
extra : rebase_source : 96b8ddb18deee94ca256bfa118b60ceacfd2d677
These APIs are intended to use the mechanism defined in Part 1.
Part 3 implements the usage of these APIs to synchronize permissions.
MozReview-Commit-ID: HNKyDPtoaHl
CLOSED TREE
Backed out changeset d24fa1b4553b (bug 1346987)
Backed out changeset 34701b9ed4ba (bug 1346987)
Backed out changeset f24f4fdc5cc8 (bug 1346987)
These APIs are intended to use the mechanism defined in Part 1.
Part 3 implements the usage of these APIs to synchronize permissions.
MozReview-Commit-ID: HNKyDPtoaHl
If Firefox is updated while it is running, the content process can end
up being a different version than the parent process. This can cause
odd crashes that happen repeatedly until the user restarts
Firefox. To handle this better, this patch adds a special build ID
message that is sent early in content process startup. The parent
process intentionally crashes if the build ID for the child process
does not match that of the parent process.
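A hedged sketch of the parent-side check; the message name, the class, and the
build-id accessor are illustrative rather than the exact ones from the patch:

    // Parent side: the child sends its build ID as one of its first messages.
    // A mismatch means the installation was updated underneath a running
    // browser, so crash deliberately with a recognizable reason instead of
    // hitting random version-skew crashes later.
    mozilla::ipc::IPCResult
    ContentParentSketch::RecvBuildID(const nsCString& aChildBuildID) {
      nsCString parentBuildID(ParentBuildID());  // assumed accessor
      if (!aChildBuildID.Equals(parentBuildID)) {
        MOZ_CRASH("Build ID mismatch between parent and content process");
      }
      return IPC_OK();
    }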
MozReview-Commit-ID: 7D3ggkaLxNS
--HG--
extra : rebase_source : 1f8d917ce01919524f949dd5bedfbbbd557f7ed3
Instead of initializing DataStorage objects on demand in the content
process, we initialize them at content process startup by getting the
parent to send down the information about the existing DataStorages at
child process startup. After that point, the dynamic change
notifications added in bug 1215723 will take care of keeping the
information in sync.
The old code would do the content process portion of the open by
immediately sending a message back to the content process, but this
has some weird issues with nesting and priorities. Instead of doing
that, I return the endpoint for the content process back to the
original sync call. This requires more code changes, to thread the
endpoint along, but it is conceptually simpler.
Once I removed the bridges and got everything working, I was able to simply
remove the spawns from the IPDL file.
MozReview-Commit-ID: 1tfiJrV4jbV
--HG--
extra : rebase_source : 1ce0012d3f51b0cdebb1954cf473811a9d6c47a7
Update the MacSandboxInfo struct to include a file system read flag and remove
filesystem read restrictions from the file content process sandbox.
MozReview-Commit-ID: B9LPocvb0W3
--HG--
extra : rebase_source : 7c80335c28dbdb7146d2ad0b447959db5e06cf0f
Instead of an "opens" declaration, the child sends a new message,
InitBackground, to the parent to create the parent side.
Most of this is threading endpoints around instead of the transport stuff.
MozReview-Commit-ID: 2c5SrCEAGyY
--HG--
extra : rebase_source : 1ee3d6631c5a7755d8e43342932ab16d9da161cd
Uses the preference security.sandbox.logging.enabled and the environment
variable MOZ_SANDBOX_LOGGING to control whether or not sandbox violations are
logged. The pref defaults to true. On Linux, only the environment variable is
considered.
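A minimal sketch of the intended precedence, assuming a hypothetical helper;
the Linux-only-env-var behavior is reflected in the conditional:

    #include "mozilla/Preferences.h"
    #include "prenv.h"

    // Hypothetical helper combining the two knobs described above.
    static bool ShouldLogSandboxViolations() {
      // If the environment variable is set at all, log. (Assumed semantics.)
      if (PR_GetEnv("MOZ_SANDBOX_LOGGING")) {
        return true;
      }
    #if defined(XP_LINUX)
      // On Linux only the environment variable is considered.
      return false;
    #else
      // Elsewhere, fall back to the pref, which defaults to true.
      return mozilla::Preferences::GetBool("security.sandbox.logging.enabled",
                                           true);
    #endif
    }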
--HG--
extra : rebase_source : f67870a74795228548b290aec32d08552c068874
Turns on sandbox denial logging if security.sandbox.logging.enabled is true.
Removes most sandbox violation messages, but some related messages generated
by other processes will still get through.
--HG--
extra : rebase_source : 4f06e70d53b0f500cc85a869c5bd7f8ea20d8341
Every new PBrowser, whether it's created by the parent or the child, needs
to get a TabGroup assigned to it. That way IPC messages for the PBrowser will
be dispatched to that TabGroup.
For new PBrowsers created by the child, we just create a new TabGroup or reuse
the opener's TabGroup.
For PBrowsers created by the parent, the child needs to intercept the
PBrowserConstructor message and assign a TabGroup immediately. PBrowsers created
by the parent never have an opener so we can always create a new TabGroup.
In both cases, the nsGlobalWindow::TabGroupOuter logic needs to be updated to
read the TabGroup out of the IPC code. Otherwise the DOM and IPC code will get
out of sync about TabGroups.
MozReview-Commit-ID: D5iEdgirfvK
This patch removes support for mozapp iframes, leaving support for
mozbrowser iframes intact. Some of the code has been rewritten in order
to phrase things in terms of mozbrowser only, as opposed to mozbrowser
or app. In some places, code that was only useful with apps has been
completely removed, so that the APIs consumed can also be removed. In
some places where the notion of appId was bleeding out of this API, we now
use NO_APP_ID. Other notions of appId which were restricted to this
API have been removed.
We will use the new type for the generated IPDL message handler
prototype to make sure the correct error handling method is called.
MozReview-Commit-ID: AzVbApxFGZ0
This ensures that when requests for keysystem access in the content process
retry, they do so on an up-to-date set of capabilities.
MozReview-Commit-ID: JxmlZnFhKYs
--HG--
extra : rebase_source : 6e02777be6a0692c7e157d3ab0a1952c3017c208
In order to avoid a synchronous call from the content process to the chrome
process to determine what GMPs are usable, maintain a cache of GMP
capabilities in the content processes.
We must seed the cache when content processes are created, as the GMP service
is started up and GMPs are added to it before the first (or any subsequent)
content process is created.
MozReview-Commit-ID: Eb4Pu81XHmn
--HG--
extra : rebase_source : ef5de4dd17ee337ca378569691e55d4cfb7939ef
Expose requestIdleCallback on Window and implement running callbacks
in idle periods by posting rICs to the main thread's idle queue.
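A rough sketch of that dispatch pattern, assuming the
NS_IdleDispatchToCurrentThread helper and a free-standing wrapper function for
illustration:

    #include <functional>
    #include <utility>
    #include "nsThreadUtils.h"

    // Post a callback at idle priority so it only runs when the main thread
    // has nothing more urgent to do; the real implementation wraps the DOM
    // IdleRequestCallback and also handles the optional timeout.
    void PostIdleCallbackSketch(std::function<void()> aCallback) {
      nsCOMPtr<nsIRunnable> runnable =
          NS_NewRunnableFunction([callback = std::move(aCallback)]() {
            callback();
          });
      NS_IdleDispatchToCurrentThread(runnable.forget());
    }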
MozReview-Commit-ID: KSYQsyaZ6is
--HG--
extra : rebase_source : 6abd41c2de96b39004f1b2c3c740e81de570970c