This was done by applying:
```
diff --git a/python/mozbuild/mozbuild/code-analysis/mach_commands.py b/python/mozbuild/mozbuild/code-analysis/mach_commands.py
index 789affde7bbf..fe33c4c7d4d1 100644
--- a/python/mozbuild/mozbuild/code-analysis/mach_commands.py
+++ b/python/mozbuild/mozbuild/code-analysis/mach_commands.py
@@ -2007,7 +2007,7 @@ class StaticAnalysis(MachCommandBase):
from subprocess import Popen, PIPE, check_output, CalledProcessError
diff_process = Popen(self._get_clang_format_diff_command(commit), stdout=PIPE)
- args = [sys.executable, clang_format_diff, "-p1", "-binary=%s" % clang_format]
+ args = [sys.executable, clang_format_diff, "-p1", "-binary=%s" % clang_format, '-sort-includes']
if not output_file:
args.append("-i")
```
Then running `./mach clang-format -c <commit-hash>`
Then undoing that patch.
Then running `check_spidermonkey_style.py --fixup`
Then running `./mach clang-format`
I had to fix four things:
* I needed to move <utility> back down in GuardObjects.h because I was hitting
obscure problems with our system include wrappers like this:
0:03.94 /usr/include/stdlib.h:550:14: error: exception specification in declaration does not match previous declaration
0:03.94 extern void *realloc (void *__ptr, size_t __size)
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/malloc_decls.h:53:1: note: previous declaration is here
0:03.94 MALLOC_DECL(realloc, void*, void*, size_t)
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/mozilla/mozalloc.h:22:32: note: expanded from macro 'MALLOC_DECL'
0:03.94 MOZ_MEMORY_API return_type name##_impl(__VA_ARGS__);
0:03.94 ^
0:03.94 <scratch space>:178:1: note: expanded from here
0:03.94 realloc_impl
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/mozmemory_wrap.h:142:41: note: expanded from macro 'realloc_impl'
0:03.94 #define realloc_impl mozmem_malloc_impl(realloc)
Which I really didn't feel like digging into.
* I had to restore the order of TrustOverrideUtils.h and related files in nss
because the .inc files depend on TrustOverrideUtils.h being included earlier.
* I had to add a missing include to RollingNumber.h
* Also had to partially restore the include order in JsepSessionImpl.cpp to
avoid -Werror issues due to static inline functions being defined in a header
but not used in the rest of the compilation unit.
Differential Revision: https://phabricator.services.mozilla.com/D60327
--HG--
extra : moz-landing-system : lando
`rg -l 'mozilla/Move.h' | xargs sed -i 's/#include "mozilla\/Move.h"/#include <utility>/g'`
Further manual fixups and cleanups to the include order incoming.
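In practice the mechanical change looks like this (a minimal illustration, not
taken from any particular file):
```
// Before the rewrite a file would contain:
//   #include "mozilla/Move.h"
// After the sed invocation above it contains:
#include <utility>

// Call sites were already written against std::move/std::forward, which
// <utility> provides, so only the include line changes.
void Take(int&&) {}
void Caller(int aValue) { Take(std::move(aValue)); }
```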
Differential Revision: https://phabricator.services.mozilla.com/D60323
--HG--
extra : moz-landing-system : lando
Now mfbt/Move.h is empty except for that excellent comment about move
semantics... Should we put it somewhere else and delete the header as a
follow-up? Or just delete the header and carry on?
Differential Revision: https://phabricator.services.mozilla.com/D60297
--HG--
extra : moz-landing-system : lando
And while at it, initialize mMutationCount in the declaration, to remove an
ugly ifdef block.
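The pattern being described, with a made-up class name (only the member name
mMutationCount comes from the patch; everything else is illustrative):
```
#include <cstdint>

class Container {
 public:
  // The constructor no longer needs an #ifdef DEBUG block just to zero the
  // counter: the initializer lives on the member declaration itself.
  Container() = default;

 private:
#ifdef DEBUG
  uint32_t mMutationCount = 0;  // initialized in the declaration
#endif
};
```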
Differential Revision: https://phabricator.services.mozilla.com/D57506
--HG--
extra : moz-landing-system : lando
Also add a test for the hash table, because we seem to have zero of them :(
Call clearAndCompact() so that we get minimum capacity on re-growth.
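A sketch of the kind of test this adds, assuming the mozilla::HashSet API (the
real test may differ):
```
#include <cstdint>

#include "mozilla/Assertions.h"
#include "mozilla/HashTable.h"

static void TestClearAndCompact() {
  mozilla::HashSet<uint32_t> set;
  for (uint32_t i = 0; i < 1000; i++) {
    MOZ_RELEASE_ASSERT(set.putNew(i));
  }
  MOZ_RELEASE_ASSERT(set.count() == 1000);

  // Drop all entries *and* release the entry storage, so that subsequent
  // growth starts again from the minimum capacity.
  set.clearAndCompact();
  MOZ_RELEASE_ASSERT(set.empty());
}
```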
Differential Revision: https://phabricator.services.mozilla.com/D57501
--HG--
extra : moz-landing-system : lando
Patch to use std::move when passing AllocPolicy instances to constructors. This also fixes HashTable move construction/assignment, which previously PodAssigned the whole object, including the AllocPolicy base.
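A simplified sketch of the constructor pattern this refers to (not the actual
HashTable code; the class and parameter names are made up):
```
#include <utility>

template <typename AllocPolicy>
class TableBase : private AllocPolicy {
 public:
  // Take the policy by value and move it into the base class subobject,
  // rather than copying it (and rather than PodAssign-ing it later).
  explicit TableBase(AllocPolicy aPolicy) : AllocPolicy(std::move(aPolicy)) {}
};
```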
Differential Revision: https://phabricator.services.mozilla.com/D36175
As discussed in the previous commit message, HashTableEntry wastes space
for common entry types. This commit reorganizes the internal storage to
store all the hashes packed together, followed by all the entries, which
eliminates the aforementioned waste.
HashTableEntry's data layout currently wastes a fair amount of space due
to ABI-mandated padding. For instance, HashTableEntry<T*> on a 64-bit
platform looks like:
class HashTableEntry {
HashNumber mKeyHash;
// Four bytes of wasted space here to pad mValueData to the correct place.
unsigned char mValueData[sizeof(T*)];
};
This wasted space means that sets of pointers backed by
mozilla::HashTable waste a quarter of their entry storage space. Maps
of pointers to pointers waste a sixth of their entry storage space.
We'd like to fix this by packing all the cached hashes together,
followed by all the hash table entries.
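A simplified sketch of the packed layout (made-up names, not the actual mfbt
code): for a capacity of N, one allocation holds N hashes followed immediately
by N entries, so pointer-sized entries no longer pay four bytes of padding
each.
```
#include <cstddef>
#include <cstdint>

using HashNumber = uint32_t;

// Made-up illustration of the packed storage: all hashes first, then all
// entries, in one allocation.
template <typename Entry>
struct PackedSlots {
  char* mMem = nullptr;
  uint32_t mCapacity = 0;

  HashNumber* hashes() { return reinterpret_cast<HashNumber*>(mMem); }
  Entry* entries() {
    // The entries block starts immediately after the block of cached hashes.
    return reinterpret_cast<Entry*>(mMem +
                                    size_t(mCapacity) * sizeof(HashNumber));
  }
  static size_t allocSize(uint32_t aCapacity) {
    return size_t(aCapacity) * (sizeof(HashNumber) + sizeof(Entry));
  }
};
```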
As a first step to laying out the hash table storage differently, we
have to make HashTable not access entries directly, but go through an
abstraction that represents the key and the entry. We call this
abstraction "slots". This commit is similar to the change done for
PLDHashTable previously.
Parts of HashTable still work in terms of Entry; the creation and
destruction of tables was not worth changing here. We'll address that
in the next commit.
We do this to make our lives easier in later patches; this check
guarantees that we don't need padding between the block of cached hash
values and the block of entries immediately after it.
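A hypothetical rendering of the check being described (the real assertion in
HashTable.h is likely phrased differently, and the minimum capacity here is
only an assumed value):
```
#include <cstdint>

using HashNumber = uint32_t;
constexpr uint32_t kMinCapacity = 4;  // assumed value, for illustration only

// Capacity is always a power of two no smaller than kMinCapacity, so the size
// of the hashes block is always a multiple of kMinCapacity * sizeof(HashNumber)
// bytes. If the entry alignment divides that, the entries block that follows
// starts correctly aligned, with no padding in between.
template <typename Entry>
constexpr bool NoInterBlockPaddingNeeded() {
  return (kMinCapacity * sizeof(HashNumber)) % alignof(Entry) == 0;
}

static_assert(NoInterBlockPaddingNeeded<void*>(),
              "pointer-sized entries need no padding after the hashes");
```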
Currently lookupForAdd() will allocate if the table has no storage. But it
doesn't report an error if the allocation fails, which can cause problems.
This patch changes things so that lookupForAdd() doesn't allocate when the table
has no storage. Instead, it returns an AddPtr that is not *valid* (its mTable
is empty) but it is *live*, and can be used in add(), whereupon the allocation
will occur.
The patch also makes Ptr::isValid() and AddPtr::isValid() non-public, because
the valid vs. live distinction is non-obvious and best kept hidden within the
classes.
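A sketch of the lookup-then-add pattern this enables, assuming the
mozilla::HashSet API:
```
#include <cstdint>

#include "mozilla/HashTable.h"

// Returns false only on OOM.
static bool NoteValue(mozilla::HashSet<uint32_t>& aSet, uint32_t aValue) {
  auto ptr = aSet.lookupForAdd(aValue);
  if (ptr) {
    return true;  // the value is already present
  }
  // `ptr` may be live but not valid here (an empty table with no storage yet);
  // add() handles that case by allocating the storage, and reports OOM by
  // returning false.
  return aSet.add(ptr, aValue);
}
```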
--HG--
extra : rebase_source : 95d58725d92cc83332e27a61f98fa61185440e26
Because they each only have a single call site, and the "too many items
removed?" test is already encapsulated within rehashIfOverloaded().
--HG--
extra : rebase_source : f6550c483477c2839764e7f657b542c24c82b1e8
infallibleRehashIfOverloaded() actually matches what it does.
Also, the patch removes the `overloaded()` test within the function, because
`rehashIfOverloaded()` has the same test.
--HG--
extra : rebase_source : 2799e623fbdd4b8bb82f8b83166752e02a0f9202
Specifically:
- checkOverloaded -> rehashIfOverloaded
- checkUnderloaded -> shrinkIfUnderloaded
- checkOverRemoved -> rehashIfOverRemoved
Because I've always found that the `check` prefix doesn't clearly explain that
the table might be changed.
And:
- shouldCompressTable -> overRemoved
Because that matches `overloaded` and `underloaded`.
--HG--
extra : rebase_source : 56e9edd012f4a400ac366895d05ea93fb09ec6b3
Because that's a more accurate description of what it does -- it finds free
*and* removed entries.
--HG--
extra : rebase_source : 72b049f44c61a7406d9691a9d9b6405dd696848c
Entry storage allocation now occurs on the first lookupForAdd()/put()/putNew().
This removes the need for init() and initialized(), and matches how
PLDHashTable/nsTHashtable work. It also removes the need for init() functions
in a lot of types that are built on top of mozilla::Hash{Map,Set}.
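A before/after sketch using mozilla::HashMap (abridged; the length parameter
and fallible put() are as described above):
```
#include <cstdint>

#include "mozilla/HashTable.h"

static bool Example() {
  // Before: a separate, fallible init() call was required before first use:
  //   mozilla::HashMap<uint32_t, uint32_t> map;
  //   if (!map.init(32)) { return false; }
  //
  // After: the optional length hint goes to the constructor, and the entry
  // storage is allocated lazily on the first lookupForAdd()/put()/putNew().
  mozilla::HashMap<uint32_t, uint32_t> map(32);
  return map.put(1, 2);  // may allocate here; returns false on OOM
}
```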
Pros:
- No need for init() calls and subsequent checks.
- No memory allocated for empty tables, which are not that uncommon.
Cons:
- An extra branch in lookup() and lookupForAdd(), but not in put()/putNew(),
because the existing checkOverloaded() can handle it.
Specifics:
- Construction now can take a length parameter.
- init() is removed. Explicit length-setting, when necessary, now occurs in the
constructors.
- initialized() is removed.
- capacity() now returns zero when the entry storage is absent.
- lookupForAdd() is no longer `const`, because it can instantiate the storage,
which requires modifications.
- lookupForAdd() can now return an invalid AddPtr in two cases:
- old: hashing failure (due to OOM in the hasher)
- new: OOM while instantiating entry storage
The existing failure handling paths for the old case work for the new case.
- clear(), finish(), and clearAndShrink() are replaced by clear(), compact(),
and reserve(). The old compactIfUnderloaded() is also removed.
- Capacity computation code is now in its own functions, bestCapacity() and
hashShift(). setTableSizeLog2() is removed.
- uint32_t is used throughout for capacities, instead of size_t, for
consistency with other similar values.
- changeTableSize() now takes a capacity instead of a deltaLog2, and it can now
handle !mTable.
Measurements:
- Total source code size is reduced by over 900 lines. Also, lots of existing
lines got shorter (e.g. two checks were reduced to one).
- Executable size barely changed, down by 2 KiB on Linux64. The extra branches
are compensated for by the lack of init() calls.
- Speed changed negligibly. The instruction count for Bench_Cpp_MozHash
increased from 2.84 billion to 2.89 billion but any execution time change was
well below noise.
Hash{Map,Set}::putNew() can be used on a table that has had elements removed,
despite some comments to the contrary.
This patch fixes those comments. It also clarifies when putNewInfallible() can
be used.
This patch also removes the !isRemoved() assertion in findFreeEntry(), which is
confusing -- !isLive() would be more precise, but also obvious from the
surrounding code.
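A short sketch of the usage the corrected comments allow, assuming
mozilla::HashSet: putNew() after remove() is fine as long as the new key is
not already present.
```
#include <cstdint>

#include "mozilla/HashTable.h"

static bool Example(mozilla::HashSet<uint32_t>& aSet) {
  if (!aSet.putNew(1) || !aSet.putNew(2)) {
    return false;  // OOM
  }
  aSet.remove(1);
  // The table now contains a removed entry; putNew() is still fine because
  // the key 3 is not already present.
  return aSet.putNew(3);
}
```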
MozReview-Commit-ID: q4qwKGBsHx
--HG--
extra : rebase_source : 94331879f9a1484159e030de93bd0ab222b54385
Because it's quite strange, badly named, not that useful, and barely used.
Also remove WeakMap::lookupWithDefault(), which is similar, but not used at
all.
MozReview-Commit-ID: IhIl4hQ73U1
--HG--
extra : rebase_source : 7da237a56391836ca5d056248f18bd5e2d8b1564
Because (a) it's kinda weird, and (b) only used in a single test, where it can
be easily replaced with a vanilla add().
MozReview-Commit-ID: L4RoxFb7yGG
--HG--
extra : rebase_source : 515a5ede5d417686907345ad9069c6a41669dd17
It's identical to mozilla::CStringHasher.
Also fix a comment at the top of HashTable.h about CStringHasher.
--HG--
extra : rebase_source : 92176c4f6ea8041f309764b4ce0271a494853a7b
I bet nobody has used them in years. The use of ad hoc profiling (printfs plus
post-processing with a tool like https://github.com/nnethercote/counts) is
generally a better approach.
Bug 1179657 did likewise for PLDHashTable.
--HG--
extra : rebase_source : 57ab26c62f2f1eff2eec827564a3c3ff0af12f01