A directory-based file copy with no checkpoint at which to abort may take a
long time to finish. This causes an issue: if Firefox is shutting down and
tries to close an ongoing update thread, the main thread may be blocked
for a long time.
This patch adds a wrapper for copying an entire directory. Within the
wrapper we copy file by file and add checkpoints so that the update thread
has a chance to abort.
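As a rough illustration of the checkpointing pattern (not the actual wrapper, which is built on Gecko's file APIs), here is a minimal standard-C++ sketch that copies a directory file by file and checks an abort flag between files:

    #include <atomic>
    #include <filesystem>
    #include <system_error>

    namespace fs = std::filesystem;

    // Copy aSource into aTarget one file at a time, checking aAborted
    // between files so a shutting-down caller is never blocked for the
    // duration of the whole directory copy.
    bool CopyDirectoryWithCheckpoints(const fs::path& aSource,
                                      const fs::path& aTarget,
                                      const std::atomic<bool>& aAborted)
    {
      std::error_code ec;
      fs::create_directories(aTarget, ec);
      if (ec) {
        return false;
      }
      for (const auto& entry : fs::recursive_directory_iterator(aSource)) {
        if (aAborted.load()) {
          return false;  // checkpoint: bail out between files
        }
        const fs::path dest = aTarget / fs::relative(entry.path(), aSource);
        if (entry.is_directory()) {
          fs::create_directories(dest, ec);
        } else {
          fs::copy_file(entry.path(), dest,
                        fs::copy_options::overwrite_existing, ec);
        }
        if (ec) {
          return false;
        }
      }
      return true;
    }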
Differential Revision: https://phabricator.services.mozilla.com/D3414
--HG--
extra : moz-landing-system : lando
This makes it possible to use different lists for tracking protection
and for the features that rely on tracking annotations.
Differential Revision: https://phabricator.services.mozilla.com/D2484
--HG--
extra : moz-landing-system : lando
The URL Classifier has an mUpdateInterrupted flag for aborting an ongoing
update at several checkpoints, but we didn't use it while shutting down.
Set this flag in PreShutdown() so that neither the url-classifier worker
thread nor the update thread takes too long to finish an update.
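The pattern is roughly the sketch below; member and method names are simplified stand-ins, not the exact nsUrlClassifierDBService code:

    #include <atomic>

    class UpdaterSketch {
    public:
      // Main thread, early in shutdown: request that any in-flight update
      // stops at its next checkpoint.
      void PreShutdown() { mUpdateInterrupted.store(true); }

      // Update/worker thread: long-running work polls the flag at
      // checkpoints instead of running to completion.
      void ApplyUpdate() {
        for (int chunk = 0; chunk < 1000; ++chunk) {
          if (mUpdateInterrupted.load()) {
            return;  // abort early so shutdown is not blocked
          }
          // ... process one chunk of the update ...
        }
      }

    private:
      std::atomic<bool> mUpdateInterrupted{false};
    };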
Differential Revision: https://phabricator.services.mozilla.com/D2157
--HG--
extra : moz-landing-system : lando
If Reset() is interleaved with a shutdown, there's no point in
finishing up and we may as well bail early.
MozReview-Commit-ID: Lhm6NfAEgSj
--HG--
extra : rebase_source : e74cc22a36d287a59425079a9f7c4676e7351eba
Explicitly specify the arguments to copy to avoid making a copy of
a dangling `this` pointer.
Convert nsUrlClassifierDBService::mClassifier to a RefPtr since
the update closure might need to continue to access its members
after it's been released by the main thread.
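A minimal sketch of the capture change, with std::shared_ptr standing in for RefPtr and simplified names (the real members live on nsUrlClassifierDBService):

    #include <functional>
    #include <memory>

    struct Classifier {
      void ApplyUpdates() { /* ... */ }
    };

    struct ServiceSketch {
      std::shared_ptr<Classifier> mClassifier;

      std::function<void()> MakeUpdateClosure() {
        // Risky: capturing `this` (or using a default capture) lets the
        // closure outlive the service and dereference a dangling pointer.
        //
        // Safer: capture exactly the refcounted object the closure needs,
        // by value, so it stays alive for as long as the closure does.
        std::shared_ptr<Classifier> classifier = mClassifier;
        return [classifier]() { classifier->ApplyUpdates(); };
      }
    };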
MozReview-Commit-ID: CPio3n9MmsK
--HG--
extra : rebase_source : d7a8b207336209ee37441f3429bc608140e404ae
Replace raw pointers to LookupResult with RefPtrs, and replace the
nsAutoPtr objects and raw pointer params with UniquePtrs.
Also remove unnecessarily paranoid OOM checks when creating single
LookupResult objects since those are pretty small.
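The shape of the ownership change is roughly the following, with std smart pointers standing in for Gecko's RefPtr/UniquePtr and the signatures invented purely for illustration:

    #include <memory>
    #include <vector>

    struct LookupResult { /* ... */ };

    // Before: raw pointers everywhere; callers had to know who frees what.
    void ProcessResultsRaw(std::vector<LookupResult*>* aResults);

    // After: the array owns its results, and ownership of the whole array
    // is handed explicitly to the consumer.
    using LookupResultArray = std::vector<std::unique_ptr<LookupResult>>;
    void ProcessResults(std::unique_ptr<LookupResultArray> aResults);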
MozReview-Commit-ID: G85RNnAat6H
--HG--
extra : rebase_source : a8f6a1ff1e24663d428c8d894cb624e1c67e1bd3
The existing mix of UniquePtr and raw pointers is confusing when
trying to figure out the exact lifetime of these objects.
MozReview-Commit-ID: Br4S7BXEFKs
--HG--
extra : rebase_source : ba35d5c2eeda0741eb4c5491a6caf03f20f3d0ce
This will prevent our holding on to this pointer incorrectly in the
future.
MozReview-Commit-ID: H8ueIOK1qAK
--HG--
extra : rebase_source : 937e9c702c5109b6dfc1684392a1204d8a1edc49
I tried to make TableUpdateArray point to const TableUpdate objects
everywhere but there were two problems:
- HashStore::ApplyUpdate() triggers a few Merge() calls which include
sorting the underlying TableUpdate object first.
- LookupCacheV4::ApplyUpdate() calls TableUpdateV4::NewChecksum() when the
checksum is missing and that sets mChecksum.
MozReview-Commit-ID: LIhJcoxo7e7
--HG--
extra : rebase_source : f6ca4bf42d1ddb897a974a0b19c7185b0b6b93af
Manually keeping tabs on the lifetime of these objects is a pain
and is the likely source of some of our crashes. I suspect we might
also be leaking memory.
This change gives the update thread an explicit copy of the main array so
that a non-thread-safe data structure is never shared between threads.
This is a shallow copy: only the pointers to the TableUpdates are copied,
which means one pointer per list (e.g. 5 in total for google4 in a new
profile).
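A sketch of the handoff, with std types standing in for nsTArray/RefPtr and a plain std::thread standing in for the long-lived update thread:

    #include <memory>
    #include <thread>
    #include <utility>
    #include <vector>

    struct TableUpdate { /* parsed update data for one list */ };
    using TableUpdateArray = std::vector<std::shared_ptr<TableUpdate>>;

    void DispatchUpdate(const TableUpdateArray& aMainThreadUpdates)
    {
      // Shallow copy: only the refcounted pointers are duplicated (one per
      // list), so the update thread never shares the original array.
      TableUpdateArray copyForUpdateThread = aMainThreadUpdates;

      std::thread([updates = std::move(copyForUpdateThread)]() {
        for (const auto& update : updates) {
          // ... apply one TableUpdate on the update thread ...
          (void)update;
        }
      }).join();  // the real code hands this to a dedicated update thread
    }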
MozReview-Commit-ID: 221d6GkKt0M
--HG--
extra : rebase_source : e1b81f11bb9b41e465571a95845079f455b5868e
HashStore::ApplyUpdate() is a V2-only function and so we can be
explicit about that and remove unnecessary casts.
Add a new update error code for when we fail to cast a TableUpdate
object to the expected protocol version.
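Conceptually the check looks like the sketch below; the type and error names are illustrative standard-C++ stand-ins, not the exact Gecko identifiers or casting helpers:

    #include <cstdint>

    enum class UpdateStatus : uint8_t {
      Ok,
      UnexpectedProtocolVersion,  // new error code for a failed cast
    };

    struct TableUpdate { virtual ~TableUpdate() = default; };
    struct TableUpdateV2 : TableUpdate { /* chunks, prefixes, ... */ };

    UpdateStatus ApplyV2Update(TableUpdate& aUpdate)
    {
      // ApplyUpdate() only understands the V2 format, so refuse anything
      // else up front instead of casting blindly.
      auto* v2 = dynamic_cast<TableUpdateV2*>(&aUpdate);
      if (!v2) {
        return UpdateStatus::UnexpectedProtocolVersion;
      }
      // ... merge v2's chunks into the hash store ...
      return UpdateStatus::Ok;
    }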
MozReview-Commit-ID: 65BBwiZJw6J
--HG--
extra : rebase_source : 3f4bb0f7594d4015e2614ef747526ec5e8168a08
RegenActiveTables() relies on mPrimed being set correctly and so
the V4 lookup cache should behave the same way as the V2 one.
The V2 lookup cache, on the other hand, was unnecessarily setting
mPrimed to true twice.
MozReview-Commit-ID: LwNdI9DTqZ7
--HG--
extra : rebase_source : ff7d7a051f28867be5c35c1079407bb2bfd3a490
"Include what you use."
--HG--
extra : rebase_source : 2239a380029e0efbc9dd3042459222a67c38d70f
extra : amend_source : 4453c32cc469caa592049167205666997f1a1e7b
extra : histedit_source : a533edd4a4d3d0642b08989e93674661d27baa6a%2C37d27eeef9580381ccc0de8507f60166dabf1730
Passing a value of -1 to nsCString::Truncate() converts that
value to a large integer and leads to an unnecessary 4GB
memory allocation.
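A tiny worked example of the conversion, assuming a 32-bit unsigned length parameter:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      int32_t requested = -1;
      // The unsigned parameter wraps the value around instead of meaning
      // "no length": 4294967295 bytes, i.e. just under 4 GiB.
      uint32_t converted = static_cast<uint32_t>(requested);
      std::printf("%lu\n", static_cast<unsigned long>(converted));
      return 0;
    }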
MozReview-Commit-ID: Icm5iUsEgA6
--HG--
extra : rebase_source : 7c7a52762ba86b1daadf60b0f362c5740d335781
This fixes usages of `Find`, `RFind` and the equality operator that kind of
work right now but will break with the proper type checking of a templatized
version of the string classes.
For `Find` and `RFind` it appears that `nsCString::(R)Find("foo", 0)` calls
were being coerced to the `Find(char*, bool, int, int)` versions. The intent was
probably to just start searching from position zero.
For the equality operator, the type of nullptr is nullptr_t rather than
char(16_t)* so we'd need to add an operator overload that takes nullptr_t. In
this case just using `IsVoid` is probably more appropriate.
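To make the coercion concrete, here is a self-contained mock (MyString and DependentString are stand-ins, not the real string classes): with a char* literal only the legacy overload is viable, so the trailing 0 binds to the bool parameter rather than to a start offset.

    #include <cstdint>
    #include <cstdio>
    #include <string>

    struct DependentString {
      std::string mValue;
      explicit DependentString(const char* aLiteral) : mValue(aLiteral) {}
    };

    struct MyString {
      std::string mValue;

      // The overload callers meant: search for aNeedle starting at aOffset.
      int32_t Find(const DependentString& aNeedle, int32_t aOffset) const {
        auto pos = mValue.find(aNeedle.mValue, static_cast<size_t>(aOffset));
        return pos == std::string::npos ? -1 : static_cast<int32_t>(pos);
      }

      // The legacy overload that actually gets selected for Find("foo", 0):
      // the literal 0 silently becomes aIgnoreCase = false.
      int32_t Find(const char* aNeedle, bool aIgnoreCase,
                   int32_t aOffset = 0, int32_t aCount = -1) const {
        (void)aIgnoreCase;
        (void)aCount;
        auto pos = mValue.find(aNeedle, static_cast<size_t>(aOffset));
        return pos == std::string::npos ? -1 : static_cast<int32_t>(pos);
      }
    };

    int main()
    {
      MyString s{"barfoo"};
      std::printf("%d\n", s.Find("foo", 0));                   // coerced call
      std::printf("%d\n", s.Find(DependentString("foo"), 0));  // explicit
      return 0;
    }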
--HG--
extra : rebase_source : 50f78519084012ca669da0a211c489520c11d6b6
We have a minimum requirement of VS 2015 for Windows builds, which supports
the z length modifier for format specifiers. So we don't need SizePrintfMacros.h
any more, and can just use %zu and friends directly everywhere.
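For example (a trivial sketch, not taken from the patch):

    #include <cstdio>
    #include <vector>

    int main()
    {
      std::vector<int> prefixes(32);
      // Previously this needed a helper macro from SizePrintfMacros.h to
      // splice in the right length modifier for size_t.
      std::printf("table has %zu prefixes\n", prefixes.size());
      return 0;
    }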
MozReview-Commit-ID: 6s78RvPFMzv
--HG--
extra : rebase_source : 009ea39eb4dac1c927aa03e4f97d8ab673de8a0e
After adopting the new thread model for safebrowsing, we create a new
lookup cache for the update so we can still query the existing lookup cache
at the same time. The prefix set and completions are generated when we open
the new lookup cache, but the cached results are not, so we would lose the
cache after that.
This patch copies the cache data from the old lookup cache to the new
lookup cache during an update.
MozReview-Commit-ID: L0WpiHOGIGm
--HG--
extra : rebase_source : ebaf249535561b87a983a3f24dacb8fbab7c096e
Functions and preferences related to mTableFreshness and gFreshnessGuarantee
can be removed since we no longer require them.
MozReview-Commit-ID: IAa0UHLSQ56
--HG--
extra : rebase_source : daa959340c92cfcb751fe8d0548d30762db3783e
In this patch, we make Safebrowsing V2 caching use the same algorithm as V4,
so we remove "mMissCache" for negative caching and the TableFreshness check
for positive caching.
However, Safebrowsing V2 gethash responses don't contain negative/positive
cache duration information, so we hard-code a fixed value, 15 minutes, as the
cache duration. In this way, the caching mechanism is the same for V2 and V4.
An extra effort required for V2 is that we need to manually record prefix
misses, because we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
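A minimal sketch of the fixed expiry, with illustrative names rather than the real cache types:

    #include <cstdint>
    #include <ctime>

    constexpr int64_t kV2CacheDurationSec = 15 * 60;  // hard-coded 15 minutes

    struct CachedResult {
      int64_t mExpirySec = 0;  // absolute expiry, seconds since the epoch
    };

    int64_t NowSec() { return static_cast<int64_t>(std::time(nullptr)); }

    // V4 gethash responses carry per-entry durations; V2 responses do not,
    // so every V2 entry (positive or negative) expires 15 minutes after it
    // was cached.
    void StampV2Expiry(CachedResult& aEntry)
    {
      aEntry.mExpirySec = NowSec() + kV2CacheDurationSec;
    }

    bool IsExpired(const CachedResult& aEntry)
    {
      return NowSec() > aEntry.mExpirySec;
    }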
--HG--
extra : rebase_source : 0fb7938464ad24a01a51493ecb1b5c0197c16123
mTableRefreshness, a non-thread-safe object, might be accessed on the worker
thread and the update thread concurrently. To solve this issue, on the update
thread we only insert data into mNewTableRefreshness and merge it into
mTableRefreshness on the worker thread later.
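The resulting pattern is roughly the sketch below (std::map stands in for the actual container type); the merge presumably runs on the worker thread only after the update has finished, so the two threads never write the same map at the same time:

    #include <cstdint>
    #include <map>
    #include <string>

    class FreshnessTrackerSketch {
    public:
      // Update thread: records freshness only in its own staging map.
      void NoteUpdated(const std::string& aTable, int64_t aNowSec) {
        mNewTableRefreshness[aTable] = aNowSec;
      }

      // Worker thread: the only place mTableRefreshness is written.
      void MergeNewRefreshness() {
        for (const auto& entry : mNewTableRefreshness) {
          mTableRefreshness[entry.first] = entry.second;
        }
        mNewTableRefreshness.clear();
      }

    private:
      std::map<std::string, int64_t> mNewTableRefreshness;  // update thread
      std::map<std::string, int64_t> mTableRefreshness;     // worker thread
    };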
MozReview-Commit-ID: 9WgoeYfWVfK
--HG--
extra : rebase_source : b7a8b4cd9d0fb1471cb81ee239f8343ff9a7b38a
LookupCacheV4::Has implements the safebrowsing v4 caching logic (sketched in
the code below):
1. Check if the fullhash matches any prefix in the local database:
- If not, the URL is safe.
2. Check if the prefix is in the cache (the prefix is always the first
4 bytes of the fullhash, Bug 1323953):
- If not, send a fullhash request.
3. Check if the fullhash is in the positive cache:
- If the fullhash is found and it has not expired, the URL is not safe.
- If the fullhash is found and it has expired, send a fullhash request.
4. If the fullhash is not found, check the negative cache expiry time:
- If the negative cache entry has not expired, the URL is safe.
- If the negative cache entry has expired, send a fullhash request.
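A compact sketch of these steps; the types and parameters are illustrative stand-ins, not the real LookupCacheV4 interface:

    #include <cstdint>

    enum class LookupDecision { Safe, Unsafe, SendFullHashRequest };

    struct CacheEntry {
      bool mHasFullHash;           // step 3: fullhash in the positive cache
      int64_t mPositiveExpirySec;  // expiry of that positive entry
      int64_t mNegativeExpirySec;  // step 4: per-prefix negative expiry
    };

    // aPrefixInDatabase - step 1: does any local prefix match the fullhash?
    // aCachedEntry      - steps 2-4: entry keyed by the 4-byte prefix, or
    //                     nullptr if the prefix is not cached.
    LookupDecision ClassifyFullHash(bool aPrefixInDatabase,
                                    const CacheEntry* aCachedEntry,
                                    int64_t aNowSec)
    {
      if (!aPrefixInDatabase) {
        return LookupDecision::Safe;                        // step 1
      }
      if (!aCachedEntry) {
        return LookupDecision::SendFullHashRequest;         // step 2
      }
      if (aCachedEntry->mHasFullHash) {                     // step 3
        return aNowSec <= aCachedEntry->mPositiveExpirySec
                   ? LookupDecision::Unsafe
                   : LookupDecision::SendFullHashRequest;
      }
      return aNowSec <= aCachedEntry->mNegativeExpirySec    // step 4
                 ? LookupDecision::Safe
                 : LookupDecision::SendFullHashRequest;
    }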
MozReview-Commit-ID: GRX7CP8ig49
--HG--
extra : rebase_source : 6d3ae8929b11731584810c44d46a1526f7d83e12
This patch fixes Classifier::ActiveTables() not returning v4 tables.
Classifier::mActiveTablesCache is generated by scanning the safebrowsing
directory. We use Classifier::ScanStoreDir to do the work, but it ignores
subdirectories. Since the v4 tables are stored in the subdirectory 'google4',
mActiveTablesCache doesn't include them.
Fix this issue by scanning subdirectories recursively in ScanStoreDir.
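The recursion pattern, sketched with std::filesystem in place of Gecko's file APIs (how the real code maps file names to table names is omitted):

    #include <filesystem>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    void ScanStoreDir(const fs::path& aDirectory,
                      std::vector<std::string>& aActiveTables)
    {
      for (const auto& entry : fs::directory_iterator(aDirectory)) {
        if (entry.is_directory()) {
          // Previously subdirectories such as "google4" were skipped, so
          // the V4 tables stored there never reached mActiveTablesCache.
          ScanStoreDir(entry.path(), aActiveTables);
          continue;
        }
        aActiveTables.push_back(entry.path().stem().string());
      }
    }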
MozReview-Commit-ID: I6pa6e4bFND
--HG--
extra : rebase_source : 49d19101bec780e21db668b2daff3c46bbdb3fd2
A new function, Classifier::AsyncApplyUpdates(), is implemented for async
updates. Besides, all public Classifier interfaces become "worker thread
only" and we remove DBServiceWorker::ApplyUpdatesBackground/Foreground.
In DBServiceWorker::FinishUpdate, instead of calling Classifier::ApplyUpdates,
we call Classifier::AsyncApplyUpdates and install a callback that notifies
the update observer when the update is finished. The callback runs on the
calling thread (i.e. the worker thread).
As for the shutdown issue: when the main thread is notified to shut down,
we first *synchronously* dispatch an event to the worker thread to shut down
the update thread. After synchronizing with all the other threads, we send
the last two events, "CancelUpdate" and "CloseDb", to notify any dangling
update (i.e. BeginUpdate was called but FinishUpdate wasn't) and do the
cleanup work.
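The rough shape of the new interface is sketched below with standard C++ stand-ins; the real code re-dispatches the callback through the worker thread's event queue instead of joining a std::thread:

    #include <functional>
    #include <thread>
    #include <utility>
    #include <vector>

    struct TableUpdate { /* ... */ };
    enum class UpdateResult { Ok, Failed };
    using UpdateCallback = std::function<void(UpdateResult)>;

    class ClassifierSketch {
    public:
      // Worker thread only: apply aUpdates off the worker thread, then run
      // aCallback back in the caller's context once the work has finished.
      void AsyncApplyUpdates(std::vector<TableUpdate> aUpdates,
                             UpdateCallback aCallback)
      {
        UpdateResult result = UpdateResult::Failed;
        std::thread updateThread([&result, updates = std::move(aUpdates)]() {
          for (const auto& update : updates) {
            // ... apply one TableUpdate on the update thread ...
            (void)update;
          }
          result = UpdateResult::Ok;
        });
        updateThread.join();
        aCallback(result);
      }
    };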
MozReview-Commit-ID: DXZvA2eFKlc
--HG--
extra : rebase_source : cd2e27a6b679d2c96e769854d1582ed2dcda12bb