This backs out all work from bug 1627075 as well as all of its
descendants. There were a few conflicts when backing this out, but
overall it was pretty clean, so I would say it's a fairly mild
level of risk. Historically Nathan Froyd has reviewed these patches,
but he is no longer at Mozilla, and no one else is particularly
familiar with the code, so I am passing this off to RyanVM, who is at
least familiar with the history of the bug.
Differential Revision: https://phabricator.services.mozilla.com/D90096
So far I have not found a way for this issue to cause bug 1656261, but
it's definitely plausible that there's something there. If we hit an
EXCEPTION_IN_PAGE_ERROR in GetBuffer, then we will exit the
DecompressEntry method with the entry's mData set but never filled in. Any
subsequent call to GetBuffer for that key will then return the
uninitialized memory. If we can hit this path with chrome.manifest,
then we would end up reading garbage from it, and could conclude that
xul.css is missing.
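A minimal sketch of the shape of the fix, in plain C++ with a hypothetical
DecompressIntoBuffer helper (this is not the actual StartupCache code):

    #include <memory>

    struct CacheEntry {
      std::unique_ptr<char[]> mData;
      size_t mUncompressedSize = 0;
    };

    // Hypothetical stand-in for the real decompression path.
    bool DecompressIntoBuffer(CacheEntry& aEntry);

    bool DecompressEntry(CacheEntry& aEntry) {
      // new char[n] is deliberately uninitialized memory.
      aEntry.mData.reset(new char[aEntry.mUncompressedSize]);
      if (!DecompressIntoBuffer(aEntry)) {
        // E.g. EXCEPTION_IN_PAGE_ERROR while reading the mapped file: drop
        // the buffer so later lookups can't hand out uninitialized memory.
        aEntry.mData.reset();
        return false;
      }
      return true;
    }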
Whether this is the culprit or not, this is a problem that needs
addressing.
Differential Revision: https://phabricator.services.mozilla.com/D87663
To be honest, it's still a mystery why we observed a regression in
sessionrestore_no_auto_restore in bug 1658732. The regression won't reproduce
on profiled runs, and the bad recordings happen before the supposedly offending
code ever actually runs. It feels most likely that it is a more or less random
confluence of factors causing a regression; however, 33% is too large a
number to ignore.
The changes in this patch do not seem to yield the same regression, and they
are arguably more correct anyway. Rather than simply turning off the cache after
startup is finished, we avoid blocking on the pending write from inside
GetBuffer. This way, if the write is not getting in the way of GetBuffer, we
can still benefit from a cached version of whatever it is we're looking for.
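The shape of that change, as a simplified sketch using std::mutex and an
illustrative table and fallback rather than the real Mozilla types:

    #include <mutex>
    #include <optional>
    #include <string>
    #include <unordered_map>

    std::mutex gTableLock;  // held by the background task while it writes
    std::unordered_map<std::string, std::string> gTable;  // simplified mTable

    // Hypothetical slow path that reads the entry out of the jar.
    std::optional<std::string> ReadFromOmnijar(const std::string& aKey);

    std::optional<std::string> GetBuffer(const std::string& aKey) {
      // Only consult the in-memory table if the lock is free; never block
      // behind the background write.
      std::unique_lock<std::mutex> lock(gTableLock, std::try_to_lock);
      if (lock.owns_lock()) {
        auto it = gTable.find(aKey);
        if (it != gTable.end()) {
          return it->second;
        }
      }
      return ReadFromOmnijar(aKey);  // fall back to the jar
    }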
Differential Revision: https://phabricator.services.mozilla.com/D87221
I don't think this will fully resolve the shutdown hangs associated
with this bug. The underlying problem is that writing the startup
cache during shutdown can take a long time. This seems like A Bad Thing,
and I have a mild leaning toward the thought that we shouldn't write the
startup cache at all during shutdown. However, I do think that trading
shutdown time regressions for startup time wins is probably good, so
I don't feel super comfortable making that change.
However, this change at least fixes the problem where we A) block the main
thread waiting for the write to finish, and B) are more likely to hit a
shutdown hang, since we concentrate all of the hang time in one phase,
rather than it being spread across the shutdown timeline. This is likely
the major change that the regressing bug introduced.
The unfortunate consequence is that if we try to GetBuffer during shutdown,
we will now read out of the omnijar. However, it should overall be better
since we'll be waiting on a (hopefully) small number of small reads rather
than a large write.
Differential Revision: https://phabricator.services.mozilla.com/D86054
Prior to this patch, we were sending a boolean from InitContentChild (which
creates our StartupCache IPC actors) indicating whether we wanted to collect
new entries from a given process or not. This was so that we wouldn't accept
PutBuffer requests in these processes, since collecting them in one process
would be enough, and we don't want to waste memory. However, we actually
want the cache to be available before we can even get that IPC constructor
to the child process, so there's a window where we accept new entries
no matter what. This patch changes that by sending a boolean argument via
the command line indicating that we want to disable the StartupCache in this
process entirely. We send this when we didn't load a StartupCache off disk,
as this should be the only circumstance in which we're actually collecting
a substantial number of entries in content processes.
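A rough sketch of the mechanism; the actual switch name isn't given in this
message, so the flag below is hypothetical:

    #include <cstring>

    // Checked very early in child-process startup, before any IPC actor
    // exists, so no window is left open where PutBuffer accepts entries.
    bool StartupCacheDisabledByCommandLine(int argc, char** argv) {
      for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], "-disableStartupCache") == 0) {  // hypothetical
          return true;
        }
      }
      return false;
    }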
Differential Revision: https://phabricator.services.mozilla.com/D83400
On first run, when the cache is empty, we collect a lot of freshly allocated
buffers for StartupCache entries. These entries are generally not requested
again after being put in the StartupCache, so they are safe to free
(as ownership of them was given to us by contract in PutBuffer). This tends
to free up >10MB.
Differential Revision: https://phabricator.services.mozilla.com/D83399
Keys can take up a nontrivial chunk of memory. This change ensures that the
memory cost is not per-process. In order to do this we had to switch from
nsCStrings to MaybeOwnedCharPtrs pointing to null-terminated char sequences.
This meant fiddling around with the code that loads from / stores to disk and
making it a little bit uglier, rather than adding another method to the
IOBuffers classes to support our weird maybe-shared string type.
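An illustrative sketch of what a maybe-owned key pointer can look like; the
real MaybeOwnedCharPtr's layout and interface may differ:

    #include <cstring>

    // A key that either owns a heap copy or borrows from the shared,
    // process-wide cache buffer, so keys need not be copied per process.
    class MaybeOwnedCharPtr {
      const char* mPtr = nullptr;
      bool mOwned = false;

     public:
      MaybeOwnedCharPtr() = default;
      // Non-owning view into e.g. the shared cache buffer.
      explicit MaybeOwnedCharPtr(const char* aBorrowed) : mPtr(aBorrowed) {}
      // Owning heap copy, for keys created in this process.
      static MaybeOwnedCharPtr Copy(const char* aStr) {
        MaybeOwnedCharPtr p;
        char* buf = new char[std::strlen(aStr) + 1];
        std::strcpy(buf, aStr);
        p.mPtr = buf;
        p.mOwned = true;
        return p;
      }
      MaybeOwnedCharPtr(MaybeOwnedCharPtr&& aOther)
          : mPtr(aOther.mPtr), mOwned(aOther.mOwned) {
        aOther.mPtr = nullptr;
        aOther.mOwned = false;
      }
      MaybeOwnedCharPtr(const MaybeOwnedCharPtr&) = delete;
      ~MaybeOwnedCharPtr() {
        if (mOwned) delete[] mPtr;
      }
      const char* get() const { return mPtr; }
    };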
Differential Revision: https://phabricator.services.mozilla.com/D83398
The overall goal of this patch is to make the StartupCache accessible anywhere.
There are two main pieces to that equation:
1. Allowing it to be accessed off main thread, which means modifying the
mutex usage to ensure that all data accessed from non-main threads is
protected.
2. Allowing it to be accessed out of the chrome process, which means passing
a handle to a shared cache buffer down to child processes.
Number 1 is somewhat fiddly, but it's all generally straightforward work. I
hope the comments and the code are sufficient to explain what's going on
there.
Number 2 has some decisions to be made:
- The first decision was to pass a handle to a frozen chunk of memory down to
all child processes, rather than passing a handle to an actual file. There are
two reasons for this: 1) since we want to compress the underlying file on
disk, giving that file to child processes would mean they have to decompress
it themselves, eating CPU time. 2) since they would have to decompress it
themselves, they would have to allocate the memory for the decompressed
buffers, meaning they cannot all simply share one big decompressed buffer.
- The drawback of this decision is that we have to load and decompress the
buffer up front, before we spawn any child processes. We attempt to
mitigate this by keeping track of all the entries that child processes
access, and only including those in the frozen decompressed shared buffer.
- We base our implementation of this approach off of the shared preferences
implementation. Hopefully I got all of the pieces to fit together
correctly. They seem to work in local testing and on try, but I think
they require a set of experienced eyes looking carefully at them.
- Another decision was whether to send the handles to the buffers over IPC or
via command line. We went with the command line approach, because the startup
cache would need to be accessed very early on in order to ensure we do not
read from any omnijars, and we could not make that work via IPC.
- Unfortunately this means adding another hard-coded FD, similar to
kPrefMapFileDescriptor. It seems like at the very least we need to rope all
of these together into one place, but I think that should be filed as a
follow-up?
Lastly, because this patch is a bit of a monster to review - first, thank you
for looking at it, and second, the reason we're invested in this is because we
saw a >10% improvement in cold startup times on reference hardware, with a
p-value below 0.01. It's still not abundantly clear how reference hardware
numbers translate to numbers on release, and they certainly don't translate
well to Nightly numbers, but it's enough to convince me that it's worth some
effort.
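As a heavily simplified sketch of the general mechanism (POSIX shared memory
here; the real patch goes through Mozilla's shared-memory and process-launch
machinery, and kStartupCacheFd below is purely illustrative):

    #include <cstring>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static const int kStartupCacheFd = 11;  // illustrative, like kPrefMapFileDescriptor

    // Parent side: decompress the cache once into anonymous shared memory
    // and return an fd that children can inherit at a well-known number.
    int CreateFrozenCacheFd(const char* aData, size_t aLen) {
      int fd = shm_open("/sc-example", O_CREAT | O_RDWR, 0600);
      if (fd < 0) return -1;
      shm_unlink("/sc-example");  // keep the region anonymous
      if (ftruncate(fd, aLen) != 0) return -1;
      void* mem = mmap(nullptr, aLen, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (mem == MAP_FAILED) return -1;
      std::memcpy(mem, aData, aLen);  // the decompressed cache contents
      munmap(mem, aLen);
      // A child that inherits this fd at kStartupCacheFd can mmap it
      // PROT_READ, so all processes share one decompressed buffer.
      return fd;
    }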
Depends on D78584
Differential Revision: https://phabricator.services.mozilla.com/D77635
We need to be able to initialize the StartupCache before the Omnijar in order to cache
all of the Omnijar contents we access. This patch implements that.
Depends on D77632
Differential Revision: https://phabricator.services.mozilla.com/D77633
Prior to this patch, the StartupCache created its own mWriteThread, on which it
wrote to disk. That thread was initialized by MaybeSpawnWriteThread, which was
called at shutdown to do the shutdown write if there was any reason to do so,
and from a timer that is re-initialized after every addition to the startup
cache, so that it ran 60 seconds after the last change to the cache.
It then joined that write thread on the main thread (in other words, blocked
on that off-main-thread write completing) when:
- xpcom-shutdown fired
- the startupcache itself gets destroyed
- someone calls any of:
* HasEntry
* GetBuffer
* PutBuffer
* InvalidateCache
This patch removes the separate write thread, and instead dispatches a task to
the background task queue, indicating it can block. The task is started in
the same circumstances where we previously used to write (timer from the last
PutBuffer call, and shutdown if necessary).
To ensure the task cannot be using the data it writes out (mTable) on the
other thread while that data changes on the main thread, we use a mutex.
The task locks the mutex before starting and unlocks it when finished.
Enumerating the cases in which we used to block on joining the thread:
In terms of application shutdown, we expect the background task queue to
either finish the write task, or fail to run it if it hasn't started it yet.
In the FastStartup case, we check if a write was necessary; if so, we
attempt to gain the lock without waiting. If we're successful, the write has
not yet started, and we instead run the write on the main thread. Otherwise,
we retry gaining the lock, blocking this time, thus guaranteeing that the
off-main-thread write completes.
The task keeps a reference to the StartupCache object, so it cannot be
destroyed while the task is pending.
Because the write does not modify `mTable`, and neither does `HasEntry`,
we do not need to do anything there.
In the `GetBuffer` case, we do not modify the table unless we have to read
the entry off disk (memory-mapped into `mCacheData`). This can only happen if
`mCacheData.initialized()` returns true, and we specifically call
`mCacheData.reset()` before firing off the write task to avoid this.
`mCacheData` is only re-initialized if someone calls `LoadArchive()`,
which can only happen from `Init()` (which is guaranteed not to run
again because this is a singleton), or `InvalidateCache()`, where we lock
the mutex (see below). So this is safe, but we assert on the lock to try
to avoid people breaking this chain of assumptions in the future.
When `PutBuffer` is called, we try to lock the mutex, but if locking fails
(i.e., the background thread is writing), we simply fail to store the entry
in the StartupCache. In practice, this should be rare: it'd happen if
new calls to PutBuffer arrive while we're writing during shutdown (when,
really, we don't care), or when 60 seconds have passed since the last
PutBuffer and we have therefore started writing the StartupCache.
When InvalidateCache is called, we lock the mutex; we shouldn't try to
write while invalidating, or invalidate while writing. This may be slow,
but in practice nothing should call `InvalidateCache` except developer
restarts or the `-purgecaches` command-line flag, so it shouldn't
matter a great deal.
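A simplified sketch of the shutdown-path logic described above, using
std::mutex and hypothetical helpers rather than the real Mozilla Mutex and
task-queue types:

    #include <mutex>

    std::mutex gWriteMutex;  // held by the background task while it writes mTable

    // Hypothetical stand-ins for the real predicate and writer.
    bool WriteWasNeeded();
    void WriteToDisk();

    void EnsureWrittenAtShutdown() {
      if (!WriteWasNeeded()) return;
      std::unique_lock<std::mutex> lock(gWriteMutex, std::try_to_lock);
      if (lock.owns_lock()) {
        WriteToDisk();  // the task hasn't started; just write on this thread
      } else {
        // The task is mid-write; block until it releases the mutex, which
        // guarantees the off-main-thread write has completed.
        std::lock_guard<std::mutex> wait(gWriteMutex);
      }
    }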
Differential Revision: https://phabricator.services.mozilla.com/D70413
Adds a new startup cache info service to expose some of the internal
state of the startup cache.
Differential Revision: https://phabricator.services.mozilla.com/D69298
The exact circumstances of how this is showing up in the wild aren't
clear; there seem to be a couple of ways we can get here. However, it
all revolves around early shutdowns (i.e., from the profile selection
popup), before the StartupCache is ever initialized. In any case, the
solution shouldn't change based on the exact circumstances: if we don't
have a StartupCache, there's no need to write one. Also, don't bother
lazily initializing it if it doesn't exist yet.
Differential Revision: https://phabricator.services.mozilla.com/D63208
Reordering the mWrittenOnce check should be sufficient to eliminate
the data race; however, I made mWrittenOnce an atomic just to reduce
the fragility of this, since it is intended to be written to and
read from on multiple threads.
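Illustratively, with plain std::atomic (the real member and call sites
differ):

    #include <atomic>

    std::atomic<bool> gWrittenOnce{false};

    void MaybeWriteStartupCache() {
      // exchange() checks and publishes the flag atomically, so two threads
      // racing here cannot both perform the write, and neither the read nor
      // the write of the flag is a data race.
      if (gWrittenOnce.exchange(true)) {
        return;  // another thread already wrote (or is writing) the cache
      }
      // ... perform the one-time write ...
    }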
Differential Revision: https://phabricator.services.mozilla.com/D62949
The initial thought for getting the StartupCache out of the shutdown
path was to simply not write it during shutdown, as it should write
60 seconds after startup, and the theory was that if the user shut
down within the first 60 seconds of use, they were likely updating or
something and it shouldn't matter. However, considering how many of
our users only ever open one tab, I think it's rather likely that
users are starting up Firefox to visit a website, then closing it
when done with that website, and then maybe opening a new instance
in order to visit a different website. Accordingly, it still makes
sense to continue writing it. However, we may as well leverage a
background thread for this and get it kicked off earlier during
shutdown, so we don't sit there blocking in the destructor late
during shutdown.
Differential Revision: https://phabricator.services.mozilla.com/D62294
The inclusions were removed with the following very crude script and the
resulting breakage was fixed up by hand. The manual fixups either
reverted the changes done by the script, replaced a generic header with a
more specific one, or replaced a header with a forward declaration.
# For every .idl file (outside web-platform and third_party), collect the
# interface/class names it declares, build a regexp matching any of them,
# and strip the corresponding #include from every file that never actually
# uses one of those names.
find . -name "*.idl" | grep -v web-platform | grep -v third_party | while read path; do
    # Names of the interfaces/classes declared in this .idl file.
    interfaces=$(grep "^\(class\|interface\).*:.*" "$path" | cut -d' ' -f2)
    if [ -n "$interfaces" ]; then
        if [[ "$interfaces" == *$'\n'* ]]; then
            # Multiple declarations: build an alternation \(A\|B\|...\).
            regexp="\("
            for i in $interfaces; do regexp="$regexp$i\|"; done
            regexp="${regexp%%\\\|}\)"
        else
            regexp="$interfaces"
        fi
        interface=$(basename "$path")
        # Find every file that includes the header generated from this .idl...
        rg -l "#include.*${interface%%.idl}.h" . | while read path2; do
            # ...and count uses of the declared names outside the #include line.
            hits=$(grep -v "#include.*${interface%%.idl}.h" "$path2" | grep -c "$regexp")
            if [ $hits -eq 0 ]; then
                # No uses: drop the inclusion.
                echo "Removing ${interface} from ${path2}"
                grep -v "#include.*${interface%%.idl}.h" "$path2" > "$path2".tmp
                mv -f "$path2".tmp "$path2"
            fi
        done
    fi
done
Differential Revision: https://phabricator.services.mozilla.com/D55444
Does what it says on the tin. Once we have a central scheduling
system, this should likely just consume that.
Differential Revision: https://phabricator.services.mozilla.com/D35454
The first run loads more things into the StartupCache than are
used on the second and subsequent runs. This just ensures that if
the StartupCache diverges too far from its actual use, we will
rebuild it.
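The message doesn't spell out the exact rule, so purely as an illustration
of the kind of check involved (names and threshold hypothetical):

    #include <cstddef>

    // Rebuild when too few of the stored entries were actually requested
    // this session, i.e. the cache has diverged from real usage.
    bool ShouldRebuildCache(size_t aRequested, size_t aStored) {
      const double kMinUsedFraction = 0.5;  // hypothetical threshold
      if (aStored == 0) return false;
      return static_cast<double>(aRequested) / aStored < kMinUsedFraction;
    }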
Differential Revision: https://phabricator.services.mozilla.com/D34654
I am not aware of anything that depends on StartupCache being a
zip file, and since I want to use lz4 compression because inflate
is showing up quite a lot in profiles, it's simplest to just use
a custom format. This loosely mimics the ScriptPreloader code,
with a few diversions:
- Obviously the contents of the cache are compressed. I used lz4
for this as I hit the same file size as deflate at a compression
level of 1, which is what the StartupCache was using previously,
while decompressing an order of magnitude faster. Seemed like
the most conservative change to make. I think it's worth
investigating what the impact of slower algorithms with higher ratios
would be, but for right now I settled on this. We'd probably
want to look at zstd next.
- I use streaming compression for this via lz4frame. This is not
strictly necessary, but has the benefit of not requiring as
much memory for large buffers, as well as giving us a built-in
checksum, rather than relying on the much slower CRC that we
were doing with the zip-based approach.
- I coded the serialization of the headers inline, since I had to
jump back to add the offset and compressed size, which would
make the nice Code(...) method for the ScriptPreloader stuff
rather more complex. Open to cleaner solutions, but moving it
out just felt like extra hoops for the reader to jump through
to understand without the benefit of being more concise.
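For the streaming piece, a self-contained sketch of what lz4frame-based
compression at level 1 with the built-in content checksum looks like; this
is illustrative, not the actual StartupCache writer (error handling
trimmed):

    #include <lz4frame.h>
    #include <vector>

    std::vector<char> CompressChunks(
        const std::vector<std::vector<char>>& aChunks) {
      LZ4F_preferences_t prefs = {};
      prefs.compressionLevel = 1;  // matches the old deflate level 1
      prefs.frameInfo.contentChecksumFlag = LZ4F_contentChecksumEnabled;

      LZ4F_cctx* ctx = nullptr;
      LZ4F_createCompressionContext(&ctx, LZ4F_VERSION);

      // Frame header first.
      std::vector<char> out(LZ4F_HEADER_SIZE_MAX);
      size_t n = LZ4F_compressBegin(ctx, out.data(), out.size(), &prefs);
      out.resize(n);

      // Stream each chunk through, growing the output by the worst case.
      for (const auto& chunk : aChunks) {
        size_t bound = LZ4F_compressBound(chunk.size(), &prefs);
        size_t off = out.size();
        out.resize(off + bound);
        n = LZ4F_compressUpdate(ctx, out.data() + off, bound, chunk.data(),
                                chunk.size(), nullptr);
        out.resize(off + n);
      }

      // End mark plus the frame's content checksum.
      size_t off = out.size();
      out.resize(off + LZ4F_compressBound(0, &prefs));
      n = LZ4F_compressEnd(ctx, out.data() + off, out.size() - off, nullptr);
      out.resize(off + n);

      LZ4F_freeCompressionContext(ctx);
      return out;
    }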
Differential Revision: https://phabricator.services.mozilla.com/D34652
This will not behave exactly the same as before if we had previously written
bad data for the entry that would fail to decompress. I imagine this is rare
enough, and the consequences are not severe enough, that this should be
fine.
Differential Revision: https://phabricator.services.mozilla.com/D30643