With this change, we check ResponseWouldVary() to make sure the cached entry describes the same entity the caller is asking about before we worry about whether the caller passed LOAD_FROM_CACHE and such. If what we have cached is fundamentally different from what the caller wants, we don't want to return it.
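A minimal sketch of the ordering this change establishes, with the surrounding nsHttpChannel logic simplified away (the flag value and helper name here are illustrative, not the actual tree code):

    #include <cstdint>

    static const uint32_t LOAD_FROM_CACHE = 1 << 10; // illustrative value

    bool ShouldUseCachedEntry(bool responseWouldVary, uint32_t loadFlags)
    {
        // First decide whether the cached entity is even the one the
        // caller asked for; a Vary mismatch means it is fundamentally
        // a different entity, whatever the load flags say.
        if (responseWouldVary)
            return false;

        // Only now do flags like LOAD_FROM_CACHE come into play.
        if (loadFlags & LOAD_FROM_CACHE)
            return true;

        return false; // normal freshness/validation logic elided
    }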
We now store two independent locations for an omni.jar, allowing the GRE/XRE
and the XUL application to each have their own omni.jar. And since xulrunner
setups are largely independent of the XUL applications, we implement support
for both the omni.jar and non-omni.jar cases in the same runtime, with the
side effect of allowing a manual switch from one to the other without
rebuilding the binaries.
We let the mozilla::Omnijar API handle both cases, so that callers need
little extra work to support them.
We also make the preferences service load the same set of preferences in all
the various cases (unified vs. separate, omni.jar vs. no omni.jar).
The child process launcher for IPC is modified to pass the base directories
needed to initialize the mozilla::Omnijar API in the child process.
Finally, the startupcache file name canonicalization is modified to separate
APP and GRE resources.
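A hedged sketch of how a caller might look up a resource in either location, preferring the application's omni.jar and falling back to the GRE one. The GRE/APP types and HasOmnijar/GetReader reflect the mozilla::Omnijar API described above, but the exact signatures here are from memory and may not match the tree:

    #include "mozilla/Omnijar.h"
    #include "nsZipArchive.h"

    nsZipItem* FindInEitherJar(const char* aEntryName)
    {
        using mozilla::Omnijar;

        // The APP location may legitimately be absent (no app omni.jar,
        // or the unified case where everything lives in the GRE jar).
        if (Omnijar::HasOmnijar(Omnijar::APP)) {
            if (nsZipArchive* app = Omnijar::GetReader(Omnijar::APP)) {
                if (nsZipItem* item = app->GetItem(aEntryName))
                    return item;
            }
        }
        if (nsZipArchive* gre = Omnijar::GetReader(Omnijar::GRE)) {
            return gre->GetItem(aEntryName);
        }
        return nullptr; // neither location is packed; fall back to flat files
    }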
raise the sockettransportservice max socket count from 50 to 550 for
linux, os x, and windows >= xp. This does not change the default http
max-connections config (which remains at 30), but it does allow
configurations above 50 to work and will enhance the utility of other
systems that use the sockettransportservice.
win9x only provides a small number of sockets (100), so we leave the
limit unchanged there out of conservatism.
--HG--
extra : rebase_source : 9d7a4b5a9112e17144fb510e3d8eb188919e5bf4
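A hedged sketch of the platform split described above; the real nsSocketTransportService probes OS limits at runtime, and the constant and helper names here are illustrative rather than the actual tree code:

    #include <cstdint>

    bool RunningOnWindows9x(); // hypothetical runtime check

    static uint32_t MaxSocketCount()
    {
    #if defined(XP_WIN)
        // win9x only provides ~100 sockets, so keep the old conservative
        // limit there; NT-class windows (>= xp) gets the higher cap.
        return RunningOnWindows9x() ? 50 : 550;
    #elif defined(XP_UNIX)
        return 550; // linux and os x
    #else
        return 50;
    #endif
    }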
Bug 645263, part 0: Count sync primitive ctor/dtors. r=dbaron
Bug 645263, part 1: Migrate content/media to mozilla:: sync primitives. r=doublec
Bug 645263, part 2: Migrate modules/plugin to mozilla:: sync primitives. sr=bsmedberg
Bug 645263, part 3: Migrate nsComponentManagerImpl to mozilla:: sync primitives. sr=bsmedberg
Bug 645263, part 4: Migrate everything else to mozilla:: sync primitives. r=dbaron
Bug 645263, part 5: Remove nsAutoLock.*. sr=bsmedberg
Bug 645263, part 6: Make editor test be nicer to deadlock detector. r=ehsan
Bug 645263, part 7: Disable tracemalloc backtraces for xpcshell tests. r=dbaron
Bug 646259: Fix nsCacheService to use a CondVar for notifying. r=cjones
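A hedged sketch of the migration pattern running through these patches: code that paired nsAutoLock with raw PRLock/PRCondVar moves to mozilla::Mutex, MutexAutoLock, and mozilla::CondVar (the class here is a made-up example, not code from the tree):

    #include "mozilla/Mutex.h"
    #include "mozilla/CondVar.h"

    class RequestQueue
    {
    public:
        RequestQueue()
            : mLock("RequestQueue.mLock")
            , mCondVar(mLock, "RequestQueue.mCondVar")
            , mReady(false)
        {
        }

        void Notify()
        {
            mozilla::MutexAutoLock lock(mLock); // replaces nsAutoLock
            mReady = true;
            mCondVar.Notify();                  // replaces PR_NotifyCondVar
        }

        void Wait()
        {
            mozilla::MutexAutoLock lock(mLock);
            while (!mReady)
                mCondVar.Wait();                // replaces PR_WaitCondVar
        }

    private:
        mozilla::Mutex   mLock;
        mozilla::CondVar mCondVar;
        bool             mReady;
    };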
A force-reload now clears persistent connections to the server related
to the force-reloaded resource. This will allow renegotiation of DNS or
server load balancing.
--HG--
extra : rebase_source : 5c55e90ea64039b9cdc0a2d85a51086d2b1d40df
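A hedged caller-side sketch: a force-reload corresponds to the existing LOAD_FRESH_CONNECTION load flag on nsIRequest, which with this change also drops matching persistent connections in the connection manager (the helper function is illustrative):

    #include "nsIChannel.h"
    #include "nsIRequest.h"

    void MarkForForceReload(nsIChannel* aChannel)
    {
        nsLoadFlags flags = 0;
        aChannel->GetLoadFlags(&flags);
        // LOAD_FRESH_CONNECTION now also clears persistent connections
        // to the host; LOAD_BYPASS_CACHE typically accompanies it on a
        // force reload.
        flags |= nsIRequest::LOAD_FRESH_CONNECTION |
                 nsIRequest::LOAD_BYPASS_CACHE;
        aChannel->SetLoadFlags(flags);
    }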
don't send the keep-alive request header. It is redundant with
Connection: keep-alive, and we don't send the right syntax for it anyhow.
--HG--
extra : rebase_source : c6d9cb95d2d1cac30bc718884eb3b909db0d6a43
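Illustratively, the change trims the request block from the redundant pair to the single meaningful header (the bare Keep-Alive value shown is the kind of token we used to emit, which is not even valid syntax for that header):

    // What we used to send:
    const char* kOldRequestHeaders =
        "Connection: keep-alive\r\n"
        "Keep-Alive: 300\r\n";   // redundant, and wrong syntax anyhow

    // What we send now:
    const char* kNewRequestHeaders =
        "Connection: keep-alive\r\n";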
* instead of making (dis)allowing HTTP/0.9 a property of the connection
info, work off a state machine that engages in the liberal skipping of
junk before response headers only immediately after a no-content
response on the same connection.
* when scanning for response headers in a large amount of junk, place a
finite limit on the scan (128KB). The only known use case for this is
skipping illegal message bodies in 304s, and those just aren't that
big.
--HG--
extra : rebase_source : 433fd6aae237d29a9957b1a70cf1e756af5a8af0
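A hedged sketch of the bounded scan from the second bullet; the names and buffer handling are illustrative rather than the nsHttpTransaction code:

    #include <cstddef>
    #include <cstring>

    static const size_t kMaxJunkScan = 128 * 1024; // the 128KB cap

    // Scan buf for the start of a response ("HTTP/"), tracking how much
    // junk has been skipped across reads. Returns the offset on success,
    // -1 if not found yet, or -2 once the cap is exhausted and the
    // connection should be treated as broken.
    long FindResponseStart(const char* buf, size_t len, size_t* totalScanned)
    {
        for (size_t i = 0; i + 5 <= len; ++i) {
            if (*totalScanned + i > kMaxJunkScan)
                return -2; // too much junk: give up rather than scan forever
            if (memcmp(buf + i, "HTTP/", 5) == 0)
                return (long)i;
        }
        *totalScanned += len;
        return -1; // only legal right after a no-content response (state machine)
    }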
Bug 614677 - Connection is reset message appears intermittently
Bug 614950 - Connections stall occasionally after 592284 landed
A couple of follow-on changes to 592284 rolled together to prevent
diff conflicts
1] Set the securitycallback information for unused speculative
connections in the connection manager to be the new cloned connection
rather than the one they originated on. (613977)
2] When adding unused speculative connections to the connection
manager, do so with a short timeout (<= 5 seconds), as some servers
get grumpy if they haven't seen a request in that time. Most will
close the connection, but some will just sit there quietly and RST
things when the connection is used - so if you don't use the
connection quickly, don't use it at all. This is probably an L4 load
balancer issue, actually; Mozillazine illustrates the problem.
Connections are made in bursts anyhow, so the reuse optimization is
likely still quite useful. (614677 and 614950)
3] Mark every connection in the connection manager's persistent
connection pool as "reused". This allows the transaction to be
restarted if a RST is received upon sending the request (see #2) -
with the conservative timeout this is now a rare event, but it is
still possible, so recovery is the right thing to do. (614677 and 614950)
4] Obtain an nsHttpConnection object from the connection manager,
subject to the max connection constraints, at the same time as
starting the backup connection. If we defer that until recycling
time, exceeding the SocketService limits can cause problems for
other connections.
This patch also re-enables the syn retry feature by default.
r+ honzab
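A hedged sketch of points 2 and 3 together; the struct and function are made up to show the shape of the change, not the nsHttpConnectionMgr code:

    #include <algorithm>
    #include <cstdint>

    struct HttpConnection
    {
        uint32_t idleTimeoutSeconds;
        bool     isReused;
    };

    void AddSpeculativeToIdlePool(HttpConnection* conn,
                                  uint32_t normalIdleTimeout)
    {
        // Some servers (or L4 load balancers) RST a connection that has
        // never carried a request, so don't trust an unused one for long.
        conn->idleTimeoutSeconds = std::min<uint32_t>(normalIdleTimeout, 5);

        // Marking it "reused" lets the transaction be restarted if the
        // first request on it is answered with a RST.
        conn->isReused = true;

        // ... insert into the persistent connection pool (elided) ...
    }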
Ensure that do_test_finished is called even when an exception is thrown.
Also make the test work in the presence of PAC, where the channel changes
between asyncOpen and onStartRequest.
r=bz a=test-only
Losing a TCP SYN requires a long painful (typically 3 second) delay
before being retried. This patch creates a second parallel connection
attempt for any nsHttpConnection which has not become writable before
a timeout occurs.
If you assume .5% packet loss, this converts a full 3 second delay
from a 1 in 200 event into a 1 in 40,000 event (both SYNs must be
lost independently: 0.005^2 = 1/40,000).
Whichever connection establishes itself first is used. If another one
has been started and it connects before the one being used is closed,
then the extra one is handed to the connection manager for use by a
different transaction - essentially a persistent connection with 0
previous transactions on it. (Another way to think about it is
pre-fetching a 3WHS on a high-latency connection.)
The pref network.http.connection-retry-timeout controls the amount of
time in ms to wait for success on the initial connection before beginning
the second one. Setting it to 0 disables the parallel connection; the
default is 250.
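A hedged, heavily simplified POSIX sketch of the idea; the real code is event driven inside nsSocketTransport/nsHttpConnection, so this blocking version only illustrates the timing logic:

    #include <fcntl.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    // Begin a non-blocking connect; EINPROGRESS is the expected result.
    static int StartConnect(const sockaddr* addr, socklen_t addrlen)
    {
        int fd = socket(addr->sa_family, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        fcntl(fd, F_SETFL, O_NONBLOCK);
        connect(fd, addr, addrlen);
        return fd;
    }

    // retryTimeoutMs mirrors network.http.connection-retry-timeout
    // (default 250; 0 disables the backup attempt).
    int ConnectWithBackup(const sockaddr* addr, socklen_t addrlen,
                          int retryTimeoutMs)
    {
        int primary = StartConnect(addr, addrlen);

        pollfd pfd = { primary, POLLOUT, 0 };
        if (retryTimeoutMs <= 0) {
            poll(&pfd, 1, -1); // backup disabled: just wait for primary
            return primary;
        }
        if (poll(&pfd, 1, retryTimeoutMs) > 0)
            return primary; // writable before the timeout: use it

        // The SYN is probably lost: race a second attempt against it.
        int backup = StartConnect(addr, addrlen);
        pollfd pfds[2] = { { primary, POLLOUT, 0 },
                           { backup,  POLLOUT, 0 } };
        poll(pfds, 2, -1);

        // Whichever became writable first wins. In the patch the loser,
        // if it later connects, is handed to the connection manager for
        // another transaction rather than being closed as it is here.
        int winner = (pfds[0].revents & POLLOUT) ? primary : backup;
        close(winner == primary ? backup : primary);
        return winner;
    }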