Commit graph

52 commits

Author SHA1 Message Date
Andrew McCreight 4babb2b5ab Bug 1609815 - Remove Web Replay C++ implementation. r=jgilbert,jandem,gbrown
Patch by bhackett and jlaster. Also reviewed by mccr8.

Differential Revision: https://phabricator.services.mozilla.com/D60197

--HG--
extra : moz-landing-system : lando
2020-02-27 17:39:15 +00:00
Ciure Andrei 00dd87f6f4 Backed out changeset d407a28318e6 (bug 1609815) for causing windows mingw bustages CLOSED TREE
--HG--
extra : histedit_source : b2c748e31e0f6ba8fcf9960a336e0bbd361b07e6
2020-02-27 07:05:19 +02:00
Andrew McCreight b197e1f783 Bug 1609815 - Remove Web Replay C++ implementation. r=jgilbert,jandem,gbrown
Patch by bhackett and jlaster. Also reviewed by mccr8.

Differential Revision: https://phabricator.services.mozilla.com/D60197

--HG--
extra : moz-landing-system : lando
2020-02-27 04:43:48 +00:00
Sylvestre Ledru 8d2f0d1b1f Bug 1519636 - Reformat recent changes to the Google coding style r=Ehsan
# ignore-this-changeset

Differential Revision: https://phabricator.services.mozilla.com/D54686

--HG--
extra : moz-landing-system : lando
2019-11-26 14:35:02 +00:00
Emilio Cobos Álvarez 30240152f3 Bug 1591132 - Make it easy to switch on and off these assertions in different build configurations. r=froydnj
Put them behind a MOZ_HASH_TABLE_CHECKS_ENABLED define, which right now is only
defined in DEBUG builds, preserving behavior.

MakeImmutable becomes an empty inline function when disabled, which should be
zero-cost.

Differential Revision: https://phabricator.services.mozilla.com/D50493

--HG--
extra : moz-landing-system : lando
2019-10-28 23:27:30 +00:00
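A minimal sketch of the configuration switch described in the commit above, assuming an illustrative class; only the MOZ_HASH_TABLE_CHECKS_ENABLED name comes from the commit message:

#ifdef DEBUG
#  define MOZ_HASH_TABLE_CHECKS_ENABLED 1
#endif

class HashTableSketch {
 public:
#ifdef MOZ_HASH_TABLE_CHECKS_ENABLED
  void MakeImmutable() { mWritable = false; }  // real checking only in checked builds
#else
  void MakeImmutable() {}  // empty inline function: zero cost when disabled
#endif

 private:
  bool mWritable = true;  // illustrative member
};
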
Emilio Cobos Álvarez 9d4e6eee54 Bug 1591132 - Remove some useless ifdefs. r=froydnj
Members' default constructors run even when the members aren't explicitly
initialized there, and these ifdefs are ugly.

Differential Revision: https://phabricator.services.mozilla.com/D50492

--HG--
extra : moz-landing-system : lando
2019-10-28 23:27:28 +00:00
Simon Giesecke 26a1799ab0 Bug 1575479 - Add support for STL iterators and range-based for to nsBaseHashtable. r=froydnj
Differential Revision: https://phabricator.services.mozilla.com/D44982

--HG--
extra : moz-landing-system : lando
2019-09-10 12:08:02 +00:00
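As a hedged illustration of what "support for STL iterators and range-based for" means mechanically, a container only has to expose begin()/end(); the tiny stand-in below is illustrative and not the nsBaseHashtable API:

#include <cstdio>

// Illustrative container: providing begin()/end() is all that range-based
// for and the STL iterator protocol require.
struct TinyTable {
  int mEntries[3] = {1, 2, 3};
  int* begin() { return mEntries; }
  int* end() { return mEntries + 3; }
};

int main() {
  TinyTable table;
  for (int value : table) {  // analogous to iterating a hashtable's entries
    std::printf("%d\n", value);
  }
}
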
Coroiu Cristina 77681b7832 Backed out 2 changesets (bug 1575479) for build bustage at build/src/gfx/thebes/gfxFT2Fonts.h on a CLOSED TREE
Backed out changeset dcbc7c69fb64 (bug 1575479)
Backed out changeset 6d54a8115393 (bug 1575479)
2019-09-10 15:01:10 +03:00
Simon Giesecke 1b5be2b03f Bug 1575479 - Add support for STL iterators and range-based for to nsBaseHashtable. r=froydnj
Differential Revision: https://phabricator.services.mozilla.com/D44982

--HG--
extra : moz-landing-system : lando
2019-09-09 14:27:24 +00:00
Sylvestre Ledru 03c8e8c2dd Bug 1519636 - clang-format-8: Reformat recent changes to the Google coding style r=Ehsan
clang-format-8 upstream had some improvements wrt macros
See: https://reviews.llvm.org/D33440
This is why the diff is bigger than usual

# ignore-this-changeset

Differential Revision: https://phabricator.services.mozilla.com/D26098

--HG--
extra : moz-landing-system : lando
2019-04-05 21:41:42 +00:00
Csoregi Natalia ba58e936bd Backed out changeset 4ad80127f89f (bug 1519636) for bustage on MarkupMap.h and nsAccessibilityService.cpp. CLOSED TREE 2019-04-05 09:48:19 +03:00
Sylvestre Ledru d1c1878603 Bug 1519636 - clang-format-8: Reformat recent changes to the Google coding style r=Ehsan
clang-format-8 upstream had some improvements wrt macros
See: https://reviews.llvm.org/D33440
This is why the diff is bigger than usual

# ignore-this-changeset

Differential Revision: https://phabricator.services.mozilla.com/D26098

--HG--
extra : moz-landing-system : lando
2019-04-04 21:36:16 +00:00
Narcis Beleuzu 24dbe577a5 Backed out changeset 389b6bbd76db (bug 1519636) for bustages on MarkupMap.h. CLOSED TREE 2019-04-05 00:27:56 +03:00
Sylvestre Ledru 399dbd28fe Bug 1519636 - clang-format-8: Reformat recent changes to the Google coding style r=Ehsan
clang-format-8 upstream had some improvements wrt macros
See: https://reviews.llvm.org/D33440
This is why the diff is bigger than usual

# ignore-this-changeset

Differential Revision: https://phabricator.services.mozilla.com/D26098

--HG--
extra : moz-landing-system : lando
2019-04-04 20:12:23 +00:00
Ryan Hunt f4a515c179 Bug 1523969 part 27 - Move method definition inline comments to new line in 'xpcom/'. r=froydnj
Differential Revision: https://phabricator.services.mozilla.com/D21131

--HG--
extra : rebase_source : 514f36238d908de221a0116f8e8a5d0cf18a168c
extra : histedit_source : 85743d2586a7307738866ce145b93dae2a664cf3
2019-02-25 16:14:01 -06:00
Nicholas Nethercote f476546aaa Bug 1530311 - Add a length check to HashTable::reserve(). r=luke
Also add an assertion to a similar function in PLDHashTable.cpp.

Differential Revision: https://phabricator.services.mozilla.com/D21167

--HG--
extra : moz-landing-system : lando
2019-02-27 00:08:13 +00:00
Sylvestre Ledru 265e672179 Bug 1511181 - Reformat everything to the Google coding style r=ehsan a=clang-format
# ignore-this-changeset

--HG--
extra : amend_source : 4d301d3b0b8711c4692392aa76088ba7fd7d1022
2018-11-30 11:46:48 +01:00
Nathan Froyd 790af83dcc Bug 1460674 - part 3 - make PLDHashTable iteration faster; r=njn
The core loop of Iterator::Next() requires multiple branches, one to
check for entry liveness and one to check for wraparound.  We can
rewrite this to use masking instead, as well as iterating only over the
hashes, and reconstructing the entry pointer when we know we've reached
a live entry.  This change cuts the time taken by the iteration portion of
the collections benchmark in half.
2018-11-26 16:24:50 -05:00
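A rough sketch of the branch-reduction idea, assuming a power-of-two capacity and a guessed liveness encoding (0 = free, 1 = removed); the real Iterator::Next() differs in detail:

#include <cstddef>
#include <cstdint>

// Hedged guess at the encoding; the point is the masked increment instead of
// a separate wraparound branch.
inline bool IsLive(uint32_t aHash) { return aHash > 1; }

// Advance to the next live slot in the packed hash array. aCapacity must be
// a power of two; the sketch assumes at least one live entry remains.
size_t NextLive(const uint32_t* aHashes, size_t aCapacity, size_t aIndex) {
  const size_t mask = aCapacity - 1;
  do {
    aIndex = (aIndex + 1) & mask;  // masked wraparound, no branch needed
  } while (!IsLive(aHashes[aIndex]));
  return aIndex;
}
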
Nathan Froyd 529e9249dd Bug 1460674 - part 2 - reorganize PLDHashTable's internal storage; r=njn
As discussed in the previous commit message, PLDHashTable's storage
wastes space for common entry types.  This commit reorganizes the
storage to store all the hashes packed together, followed by all the
entries, which eliminates said waste.
2018-11-26 16:24:50 -05:00
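A sketch of the packed layout described above, with illustrative types and with alignment bookkeeping glossed over:

#include <cstdint>
#include <cstdlib>

struct PackedStore {
  // Single allocation: [hash0, hash1, ..., hashN-1][entry0, entry1, ...]
  char* mBuffer = nullptr;

  uint32_t* Hashes() { return reinterpret_cast<uint32_t*>(mBuffer); }

  char* Entries(uint32_t aCapacity) {
    // Entries start right after the packed hash array (alignment concerns
    // are ignored in this sketch).
    return mBuffer + aCapacity * sizeof(uint32_t);
  }

  void Alloc(uint32_t aCapacity, uint32_t aEntrySize) {
    mBuffer =
        static_cast<char*>(calloc(aCapacity, sizeof(uint32_t) + aEntrySize));
  }
};
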
Nathan Froyd d848c8a50e Bug 1460674 - part 1 - change PLDHashTable internals to work on slots; r=njn
PLDHashTable requires that all items stored therein inherit from
PLDHashEntryHdr:

struct PLDHashEntryHdr {
  // PLDHashNumber is a uint32_t.
  PLDHashNumber mKeyHash; // Cached hash key for object.
};

class MyType : public PLDHashEntryHdr {
  // Data members, etc.
};

PLDHashEntryHdr::mKeyHash is used to cache the computed hash value of
the entry, so we aren't rehashing entries on every lookup/add/etc.

Because of structure layout requirements on 64-bit platforms, the data
members of MyType will typically start at offset 8:

MyType, offset 0: mKeyHash
MyType, offset 4: padding required by alignment
MyType, offset 8: first data member of MyType
MyType, offset N: ...

The padding at offset 4 is dead, unused space.

We'd like to change this state of affairs by having PLDHashTable manage
the cached hash key itself, which would eliminate the dead space in the
object and would enable packing the table storage more tightly.  But
PLDHashTable pervasively assumes that its internal storage is an array
of PLDHashEntryHdrs (with an associated object size to account for
subclassing).

As a first step to laying out the hash table storage differently, we
have to make the internals of PLDHashTable not access PLDHashEntryHdr
items directly, but layer an abstraction on top.  We call this
abstraction "slots": a slot represents storage for the cached hash key
and the associated entry, and can produce a PLDHashEntryHdr* on demand.
2018-11-26 16:24:50 -05:00
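A minimal sketch of such a slot, reusing the PLDHashEntryHdr definition quoted above; the member and method names are illustrative:

#include <cstdint>

using PLDHashNumber = uint32_t;                       // as in the message above
struct PLDHashEntryHdr { PLDHashNumber mKeyHash; };   // as in the message above

class Slot {
 public:
  Slot(PLDHashNumber* aKeyHash, PLDHashEntryHdr* aEntry)
      : mKeyHash(aKeyHash), mEntry(aEntry) {}

  PLDHashNumber KeyHash() const { return *mKeyHash; }
  void SetKeyHash(PLDHashNumber aHash) { *mKeyHash = aHash; }

  // Callers that still want the old-style header get it on demand.
  PLDHashEntryHdr* ToEntry() const { return mEntry; }

 private:
  PLDHashNumber* mKeyHash;   // points into the packed hash array
  PLDHashEntryHdr* mEntry;   // points into the packed entry storage
};
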
Nathan Froyd 7c8ce06436 Bug 1460674 - part 0b - stop trying to be const-correct in Get(); r=njn
The only place where this is used where the const-ness matters is in
AddressEntry, and that use const_cast's away the const-ness.  So let's
just ditch the method that attempts to return const pointers.
2018-11-26 16:24:50 -05:00
Nathan Froyd d98ab29fdb Bug 1460674 - part 0a - store the hash table entry size in iterators; r=njn
This change is satisfying insofar as it removes a load from every
iteration step over the hashtable, but it doesn't seem to affect
our collection benchmarks in any way.  The change does not make
iterators any larger, as there is padding at the end of the iterator
structure on both 32- and 64-bit platforms.
2018-11-26 16:24:50 -05:00
Nathan Froyd 1ae4750ecb Bug 1506730 - remove PLDHashTable::Iterator::mStart; r=njn
We only use its value in one place, and said value is easily computable
from readily available information.  This change makes iterators
slightly smaller.
2018-11-19 09:54:04 -05:00
Nicholas Nethercote 0f205a7ce0 Bug 1477626 - Move ScrambleHashCode() from js/src/Utility.h to mfbt/HashFunctions.h. r=Waldo
And use it in PLDHashTable.cpp.

MozReview-Commit-ID: BqwEkE0p5AG

--HG--
extra : rebase_source : bd9118e24b82add6ad1fdcb067a5f25b25e90201
2018-07-26 18:52:47 +10:00
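For context, a hedged sketch of what ScrambleHashCode() amounts to (a wrapping multiply by the 32-bit golden-ratio constant); treat it as an approximation of the mfbt definition rather than a verbatim copy:

#include <cstdint>

constexpr uint32_t kGoldenRatioU32 = 0x9E3779B9u;

inline uint32_t ScrambleHashCode(uint32_t aHash) {
  // Unsigned multiply wraps; this spreads the input bits across the whole
  // 32-bit value so poor hash functions still distribute reasonably.
  return aHash * kGoldenRatioU32;
}
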
Nicholas Nethercote 25a1140207 Bug 1477626 - Introduce mozilla::HashNumber and use it in various places. r=Waldo
Currently we have three ways of representing hash values.

- uint32_t: used in HashFunctions.h.

- PLDHashNumber: defined in PLDHashTable.{h,cpp}.

- js::HashNumber: defined in js/public/Utility.h.

Functions that create hash values with functions from HashFunctions.h use a mix
of these three types. It's a bit of a mess.

This patch introduces mozilla::HashNumber, and redefines PLDHashNumber and
js::HashNumber as synonyms. It also changes HashFunctions.h to use
mozilla::HashNumber throughout instead of uint32_t.

This leaves plenty of places that still use uint32_t that should use
mozilla::HashNumber or one of its synonyms, but I didn't want to tackle that
now.

The patch also:

- Does similar things for the constants defining the number of bits in each
  hash number type.

- Moves js::HashNumber from Utility.h to HashTable.h, which is a better spot
  for it. (This required changing the signature of ScrambleHashCode(); that's
  ok, it'll get moved by the next patch anyway.)

MozReview-Commit-ID: EdoWlCm7OUC

--HG--
extra : rebase_source : 5b92c0c3560eb56850cd8832f8ee514d25e3c16f
2018-07-26 18:52:46 +10:00
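The unification described above boils down to a handful of aliases; a sketch (the namespaces are from the commit message, the exact spelling of the constants is an assumption):

#include <cstdint>

namespace mozilla {
using HashNumber = uint32_t;
constexpr uint32_t kHashNumberBits = 32;  // illustrative constant name
}  // namespace mozilla

// The older names become synonyms rather than independent types.
using PLDHashNumber = mozilla::HashNumber;
namespace js {
using HashNumber = mozilla::HashNumber;
}  // namespace js
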
Nicholas Nethercote 85db906378 Bug 1477632 - Always inline PLDHashTable::SearchTable(). r=froydnj
This speeds up BenchCollections.PLDHash as follows:

Before:

>     succ_lookups      fail_lookups     insert_remove           iterate
>          42.8 ms           51.6 ms           21.0 ms           34.9 ms
>          41.8 ms           51.9 ms           20.0 ms           34.6 ms
>          41.6 ms           51.3 ms           19.3 ms           34.5 ms
>          41.6 ms           51.3 ms           19.8 ms           35.0 ms
>          41.7 ms           50.8 ms           20.5 ms           35.2 ms

After:

>     succ_lookups      fail_lookups     insert_remove           iterate
>          37.7 ms           33.1 ms           19.7 ms           35.0 ms
>          37.0 ms           32.5 ms           19.1 ms           35.4 ms
>          37.6 ms           33.6 ms           19.2 ms           36.2 ms
>          36.7 ms           33.3 ms           19.1 ms           35.3 ms
>          37.1 ms           33.1 ms           19.1 ms           35.0 ms

Successful lookups are about 1.13x faster, failing lookups are about 1.54x
faster, and insertions/removals are about 1.05x faster.

On Linux64, this increases the size of libxul (as measured by `size`) by a mere
16 bytes.

--HG--
extra : rebase_source : 4bfdee68ba40a0cdd5396964d1647a70fb10736a
2018-07-24 11:09:35 +10:00
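The patch itself is essentially an inlining annotation on the hot lookup path; a self-contained sketch using a stand-in macro rather than the real MOZ_ALWAYS_INLINE, with a simplified signature:

#if defined(__GNUC__) || defined(__clang__)
#  define ALWAYS_INLINE_SKETCH inline __attribute__((always_inline))
#else
#  define ALWAYS_INLINE_SKETCH inline
#endif

// Forcing the hot search path to inline is what produced the speedups above,
// at the cost of a few bytes of code size.
ALWAYS_INLINE_SKETCH int SearchTableSketch(const int* aTable, int aLen,
                                           int aKey) {
  for (int i = 0; i < aLen; ++i) {
    if (aTable[i] == aKey) {
      return i;
    }
  }
  return -1;
}
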
Brian Hackett ecc6572b9d Bug 1207696 Part 8f - Ensure that PL and PLD hashtables have consistent iteration order when recording/replaying, r=froydnj.
--HG--
extra : rebase_source : 5ed9fb1339c88f99214bc4159eefa34383263e94
2018-07-23 14:47:55 +00:00
Nathan Froyd 0c5c5f4d2f Bug 1468514 - make resizing PLDHashTable smarter; r=erahm
The current code for resizing PLDHashTable modifies the cached hash for
all entries in the old hash table.  This is unnecessary, because we're
going to throw away the old hash table shortly, and inefficient, because
writing to memory we're never going to use again just wastes time and
memory bandwidth.  Instead, let's avoid the write by pulling out the
cached key and doing the necessary manipulation on local variables,
which is probably slightly faster.
2018-07-23 10:30:57 -05:00
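A sketch of the rehashing idea: read the cached hash into a local and compute the new position from it, never writing back into the old table; all names are illustrative:

#include <cstddef>
#include <cstdint>

struct OldEntrySketch {
  uint32_t mKeyHash;
  // ... payload ...
};

void MoveOne(const OldEntrySketch& aOld, uint32_t* aNewHashes,
             size_t aNewCapacity) {
  // Pull the cached hash into a local; never touch aOld.mKeyHash again, since
  // the old table is about to be freed.
  const uint32_t keyHash = aOld.mKeyHash;
  const size_t newIndex = keyHash & (aNewCapacity - 1);  // simplified probe
  aNewHashes[newIndex] = keyHash;
  // ... copy the payload into the new entry storage ...
}
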
Emilio Cobos Álvarez 09e04b0f0f Bug 1475980: A moved table should be empty. r=froydnj
MozReview-Commit-ID: 7K9wNGhIhaD
2018-07-16 18:06:37 +02:00
Masayuki Nakano 78077d5b3c Bug 1475461 - part 1: Mark PLDHashTable::Search() and called by it as const r=Ehsan
PLDHashTable::Search() does not modify any members.  So, this method and
methods called by it should be marked as const.

MozReview-Commit-ID: 6g4jrYK1j9E

--HG--
extra : rebase_source : eda6c50c538fec0e8c09cb2ba629735eea6ec711
2018-07-13 16:56:29 +09:00
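A small sketch of the const-correctness change, on an illustrative table type rather than PLDHashTable itself:

struct EntrySketch {
  int mKey;
};

class TableSketch {
 public:
  // Search does not modify the table, so it (and its helpers) are const.
  const EntrySketch* Search(int aKey) const {
    for (int i = 0; i < mCount; ++i) {
      if (mEntries[i].mKey == aKey) {
        return &mEntries[i];
      }
    }
    return nullptr;
  }

 private:
  EntrySketch mEntries[4] = {};
  int mCount = 0;
};
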
Emilio Cobos Álvarez fffb25b74f Bug 1465585: Switch from mozilla::Move to std::move. r=froydnj
This was done automatically replacing:

  s/mozilla::Move/std::move/
  s/ Move(/ std::move(/
  s/(Move(/(std::move(/

Removing the 'using mozilla::Move;' lines.

And then with a few manual fixups; see the bug for the split series.

MozReview-Commit-ID: Jxze3adipUh
2018-06-01 10:45:27 +02:00
Eric Rahm 098e642086 Bug 1452202 - Clean up PLDHashTable move operator. r=froydnj
--HG--
extra : rebase_source : 2fe44e09de738ffa827b8c7fd51162eaa8c53890
2018-04-09 11:01:59 -07:00
Eric Rahm 691c3159a1 Bug 1452288 - Use calloc for allocating PLDHashTable entries. r=froydnj
--HG--
extra : rebase_source : 73f3aba71dc8ad8bc23786153f2a22cdbf86f8dd
2018-04-06 16:51:58 -07:00
Nicholas Nethercote 225f3a87e4 Bug 1400193 (part 2) - Shrink PLDHashTable. r=froydnj.
This patch reduces sizeof(PLDHashTable) as follows.

- 64-bit: from 40 bytes to 32
- 32-bit: from 28 bytes to 20

It does this by doing the following.

- It moves mGeneration from EntryStore to PLDHashTable, to avoid unnecessary
  padding on 64-bit. This requires tweaking EntryStore::Set() as explained in a
  comment.

- It also shrinks mGeneration from uint32_t to uint16_t, saving 2 bytes of
  data.

- It shrinks mEntrySize from uint32_t to uint8_t, to cut 3 bytes of data.

- It shrinks mHashShift from int16_t to uint8_t, trimming another byte of data,
  and moves it, saving another 2 bytes of padding.

And it reorders the fields so the word-sized ones are at the start, which makes
it easier to imagine the memory layout.

The patch also adds a test, and fixes some misordered function arguments in
existing tests.

--HG--
extra : rebase_source : 6ed6f7be68477fd4a82f07dd2f51c1f1d9b92dcc
2017-09-15 20:04:29 +10:00
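A sketch of the shrunken layout; the field names and widths follow the commit message, while the exact ordering beyond "word-sized members first" and the omission of debug-only members are assumptions:

#include <cstdint>

struct ShrunkenTableSketch {
  // Word-sized members first, so padding only appears at the tail.
  const void* mOps;       // hashing/matching callbacks
  char* mEntryStore;      // packed entry storage
  uint16_t mGeneration;   // was uint32_t, previously lived in EntryStore
  uint8_t mEntrySize;     // was uint32_t
  uint8_t mHashShift;     // was int16_t
  uint32_t mEntryCount;
  uint32_t mRemovedCount;
};
// 64-bit: 8 + 8 + 2 + 1 + 1 + 4 + 4 = 28, padded to 32 bytes (down from 40).
// 32-bit: 4 + 4 + 2 + 1 + 1 + 4 + 4 = 20 bytes (down from 28).
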
Nicholas Nethercote 6061f358d7 Bug 1400193 (part 1) - Fix subtle bug in PLDHashTable's move constructor. r=froydnj.
The current code replaces one PLDHashTable's mGeneration with another. If the
two generation values happen to be equal, it will look as though the storage
hasn't changed when really it has.

The fix is to use EntryStore::Set() to update mEntryStore, which increments
mGeneration appropriately.

--HG--
extra : rebase_source : d1779a143746c8d1a3e67bc3685e703496206b0f
2017-09-15 20:02:33 +10:00
Ehsan Akhgari 43091e2839 Bug 1379282 - Improve XPCOM's pointer hashing functions for pointers to neighboring memory locations; r=erahm
The simplistic shift-based hashing function creates a lot of collisions
for pointers into arrays, as it doesn't do a great job of distributing
the data randomly based on the input bytes.
2017-07-19 14:16:35 -04:00
Ehsan Akhgari 3df4db262f Backout changeset 06f92d065a85 (bug 1377333) because we don't need keys that are that big 2017-07-18 23:06:00 -04:00
Ehsan Akhgari ec3d589e4f Bug 1377333 - Make PLDHashNumber 64-bit on x86-64; r=froydnj 2017-07-04 11:05:21 -04:00
Sebastian Hengst 51f7ac9c81 Backed out changeset 3009a0b538da (bug 1377333) on suspicion of causing frequent failures in test_general.html. r=backout 2017-07-04 00:37:24 +02:00
Ehsan Akhgari de718c51ec Bug 1377333 - Make PLDHashNumber 64-bit on x86-64; r=froydnj 2017-07-03 13:21:11 -04:00
Ehsan Akhgari 04f49b5fc2 Bug 1376563 - Improve the hash key generation for hashtables containing pointers on 64-bit platforms by using 2 more bits of the original pointer in calculating the hash key; r=froydnj 2017-06-27 18:18:32 -04:00
L. David Baron 07c2f2dc49 Bug 1352889 - Ensure that PLDHashTable's second hash doesn't have padding with 0 bits for tables with capacity larger than 2^16. r=njn
PLDHashTable takes the result of the hash function and multiplies it by
kGoldenRatio to ensure that it has a good distribution of bits across
the 32-bit hash value, and then zeroes out the low bit so that it can be
used for the collision flag.  This result is called hash0.  From hash0
it computes two different numbers used to find entries in the table
storage:  hash1 is used to find an initial position in the table to
begin searching for an entry; hash2 is then used to repeatedly offset
that position (mod the size of the table) to build a chain of positions
to search.

In a table with capacity 2^c entries, hash1 is simply the upper c bits
of hash0.  This patch does not change this.

Prior to this patch, hash2 was the c bits below hash1, padded at the low
end with zeroes when c > 16.  (Note that bug 927705, changeset
1a02bec165e16f370cace3da21bb2b377a0a7242, increased the maximum capacity
from 2^23 to 2^26 since 2^23 was sometimes insufficient!)  This manner
of computing hash2 is problematic because it increases the risk of long
chains for very large tables, since there is less variation in the hash2
result due to the zero padding.

So this patch changes the hash2 computation by using the low bits of
hash0 instead of shifting it around, thus avoiding 0 bits in parts of
the hash2 value that are significant.

Note that this changes what hash2 is in all cases except when the table
capacity is exactly 2^16, so it does change our hashing characteristics.
For tables with capacity less than 2^16, it should be using a different
second hash, but with the same amount of random-ish data.  For tables
with capacity greater than 2^16, it should be using more random-ish
data.

Note that this patch depends on the patch for bug 1353458 in order to
avoid causing test failures.

MozReview-Commit-ID: JvnxAMBY711

--HG--
extra : transplant_source : 2%D2%C2%CE%E1%92%C8%F8H%D7%15%A4%86%5B%3Ac%0B%08%3DA
2017-05-31 13:44:02 -07:00
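A hedged sketch of the two derived hash values for a table of capacity 2^c; the odd-stride detail is illustrative, and the essential change is that hash2 is now taken from the low bits of hash0:

#include <cstdint>

struct TwoHashes {
  uint32_t hash1;  // initial probe position: the upper c bits of hash0
  uint32_t hash2;  // probe stride: now the low c bits of hash0
};

TwoHashes ComputeHashes(uint32_t aHash0, uint32_t aCapacityLog2) {
  const uint32_t sizeMask = (1u << aCapacityLog2) - 1;
  TwoHashes h;
  h.hash1 = aHash0 >> (32 - aCapacityLog2);
  // Before the patch, hash2 came from the c bits just below hash1, which are
  // zero-padded at the low end once c > 16 (see the commit message above).
  h.hash2 = (aHash0 & sizeMask) | 1;  // made odd so the probe chain can visit
                                      // every slot (illustrative detail)
  return h;
}
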
L. David Baron f22aab91c2 Bug 1352888 - Don't set the collision flag when adding to PLDHashTable if we've already found the entry we're going to add. r=njn
PLDHashTable's entry store has two types of unoccupied entries:  free
entries and removed entries.  The search of a chain of entries
(determined by the hash value) in the entry store to search for an entry
can stop at free entries, but it continues across removed entries,
because removed entries are entries that may have been skipped over when
we were adding the value we're searching for to the hash, but have since
been removed.  For live entries, we also maintain this distinction by
using one bit of storage for a collision flag, which notes that if the
hashtable entry is removed, its place in the entry store must become a
removed entry rather than a free entry.

When we add a new entry to the table, Add's semantics require that we
return an existing entry if there is one, and only create a new entry if
no existing entry exists.  (Bug 1352198 suggests the possibility of a
faster alternative Add API where the caller guarantees that the key is
not already in the hashtable.)  When we search for the existing entry,
we must thus continue the search across removed entries, even though we
record the first removed entry found to return if the search for an
existing entry fails.

The existing code adds the collision flag through the entire table
search during an Add.  This patch changes that behavior so that we only
add the collision flag prior to finding the first removed entry.  Adding
it after we find the first removed entry is unnecessary, since we are
not making that entry part of a path to a new entry.  If it is part of a
path to an existing entry, it will already have the collision flag set.

This patch effectively puts an if (!firstRemoved) around the else branch
of the if (MOZ_UNLIKELY(EntryIsRemoved(entry))), and then refactors that
condition outwards since it is now around the contents of both the if
and else branches.

MozReview-Commit-ID: CsXnMYttHVy

--HG--
extra : transplant_source : %80%9E%83%EC%CCY%B4%B0%86%86%18%99%B6U%21o%5D%29%AD%04
2017-05-31 13:44:02 -07:00
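A sketch of the hoisted condition the last paragraph describes; the stand-in definitions exist only so the fragment compiles, and the liveness encoding is a hedged guess:

#include <cstdint>

using PLDHashNumber = uint32_t;
struct PLDHashEntryHdr { PLDHashNumber mKeyHash; };
const PLDHashNumber kCollisionFlag = 1;
inline bool EntryIsRemoved(PLDHashEntryHdr* aEntry) {
  return aEntry->mKeyHash == 1;  // hedged guess at the removed-entry encoding
}

// One step of the Add() search loop, after 'aEntry' failed to match the key.
void NoteChainStep(PLDHashEntryHdr* aEntry, PLDHashEntryHdr*& aFirstRemoved) {
  if (!aFirstRemoved) {
    if (EntryIsRemoved(aEntry)) {
      aFirstRemoved = aEntry;               // remember the first reusable slot
    } else {
      aEntry->mKeyHash |= kCollisionFlag;   // only flag the prefix of the
                                            // chain a new entry could extend
    }
  }
}
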
L. David Baron f2c24eea94 Backed out changeset e4ac2148c920 (bug 1352888) to see if it is responsible for input latency regression bug 1362094.
--HG--
extra : rebase_source : bfb451e911058496c2a6498d658cb6ab777da8e6
2017-05-24 11:42:19 -04:00
L. David Baron 34315c4eee Backed out changeset 52fff3b1e209 (bug 1352889) to see if it is responsible for input latency regressions in bug 1365334 or bug 1366156.
--HG--
extra : rebase_source : 387401d1417a29c19e5dbc1ee7413d9d6f810d23
2017-05-24 11:41:02 -04:00
L. David Baron 5191ef7b3b Bug 1352889 - Ensure that PLDHashTable's second hash doesn't have padding with 0 bits for tables with capacity larger than 2^16. r=njn
PLDHashTable takes the result of the hash function and multiplies it by
kGoldenRatio to ensure that it has a good distribution of bits across
the 32-bit hash value, and then zeroes out the low bit so that it can be
used for the collision flag.  This result is called hash0.  From hash0
it computes two different numbers used to find entries in the table
storage:  hash1 is used to find an initial position in the table to
begin searching for an entry; hash2 is then used to repeatedly offset
that position (mod the size of the table) to build a chain of positions
to search.

In a table with capacity 2^c entries, hash1 is simply the upper c bits
of hash0.  This patch does not change this.

Prior to this patch, hash2 was the c bits below hash1, padded at the low
end with zeroes when c > 16.  (Note that bug 927705, changeset
1a02bec165e16f370cace3da21bb2b377a0a7242, increased the maximum capacity
from 2^23 to 2^26 since 2^23 was sometimes insufficient!)  This manner
of computing hash2 is problematic because it increases the risk of long
chains for very large tables, since there is less variation in the hash2
result due to the zero padding.

So this patch changes the hash2 computation by using the low bits of
hash0 instead of shifting it around, thus avoiding 0 bits in parts of
the hash2 value that are significant.

Note that this changes what hash2 is in all cases except when the table
capacity is exactly 2^16, so it does change our hashing characteristics.
For tables with capacity less than 2^16, it should be using a different
second hash, but with the same amount of random-ish data.  For tables
with capacity greater than 2^16, it should be using more random-ish
data.

Note that this patch depends on the patch for bug 1353458 in order to
avoid causing test failures.

MozReview-Commit-ID: JvnxAMBY711

--HG--
extra : rebase_source : dbde93b9312dd10fb3a549ee4098b57e1df5cadf
2017-05-04 15:17:50 -07:00
L. David Baron 443904b36a Bug 1352888 - Don't set the collision flag when adding to PLDHashTable if we've already found the entry we're going to add. r=njn
PLDHashTable's entry store has two types of unoccupied entries:  free
entries and removed entries.  The search of a chain of entries
(determined by the hash value) in the entry store to search for an entry
can stop at free entries, but it continues across removed entries,
because removed entries are entries that may have been skipped over when
we were adding the value we're searching for to the hash, but have since
been removed.  For live entries, we also maintain this distinction by
using one bit of storage for a collision flag, which notes that if the
hashtable entry is removed, its place in the entry store must become a
removed entry rather than a free entry.

When we add a new entry to the table, Add's semantics require that we
return an existing entry if there is one, and only create a new entry if
no existing entry exists.  (Bug 1352198 suggests the possibility of a
faster alternative Add API where the caller guarantees that the key is
not already in the hashtable.)  When we search for the existing entry,
we must thus continue the search across removed entries, even though we
record the first removed entry found to return if the search for an
existing entry fails.

The existing code adds the collision flag through the entire table
search during an Add.  This patch changes that behavior so that we only
add the collision flag prior to finding the first removed entry.  Adding
it after we find the first removed entry is unnecessary, since we are
not making that entry part of a path to a new entry.  If it is part of a
path to an existing entry, it will already have the collision flag set.

This patch effectively puts an if (!firstRemoved) around the else branch
of the if (MOZ_UNLIKELY(EntryIsRemoved(entry))), and then refactors that
condition outwards since it is now around the contents of both the if
and else branches.

MozReview-Commit-ID: CsXnMYttHVy

--HG--
extra : transplant_source : W4%B8%BA%D5p%102%1B%8D%83%23%E0s%B3%B0f%0D%05%AE
2017-04-04 20:59:21 -07:00
Carsten "Tomcat" Book 92b960417c Backed out changeset 8f8e8cd713ad (bug 1352888) 2017-04-04 09:54:55 +02:00
Carsten "Tomcat" Book 6fa4a7de2c Backed out changeset dfdb5742823a (bug 1352889) 2017-04-04 09:54:53 +02:00
L. David Baron 4d700b54f1 Bug 1352889 - Ensure that PLDHashTable's second hash doesn't have padding with 0 bits for tables with capacity larger than 2^16. r=njn
PLDHashTable takes the result of the hash function and multiplies it by
kGoldenRatio to ensure that it has a good distribution of bits across
the 32-bit hash value, and then zeroes out the low bit so that it can be
used for the collision flag.  This result is called hash0.  From hash0
it computes two different numbers used to find entries in the table
storage:  hash1 is used to find an initial position in the table to
begin searching for an entry; hash2 is then used to repeatedly offset
that position (mod the size of the table) to build a chain of positions
to search.

In a table with capacity 2^c entries, hash1 is simply the upper c bits
of hash0.  This patch does not change this.

Prior to this patch, hash2 was the c bits below hash1, padded at the low
end with zeroes when c > 16.  (Note that bug 927705, changeset
1a02bec165e16f370cace3da21bb2b377a0a7242, increased the maximum capacity
from 2^23 to 2^26 since 2^23 was sometimes insufficient!)  This manner
of computing hash2 is problematic because it increases the risk of long
chains for very large tables, since there is less variation in the hash2
result due to the zero padding.

So this patch changes the hash2 computation by using the low bits of
hash0 instead of shifting it around, thus avoiding 0 bits in parts of
the hash2 value that are significant.

Note that this changes what hash2 is in all cases except when the table
capacity is exactly 2^16, so it does change our hashing characteristics.
For tables with capacity less than 2^16, it should be using a different
second hash, but with the same amount of random-ish data.  For tables
with capacity greater than 2^16, it should be using more random-ish
data.

MozReview-Commit-ID: JvnxAMBY711

--HG--
extra : transplant_source : %8A%25%FB%E3H%B8_%F1G%F6%3E%0B%29%DF%20%FF%D8%E1%AEw
2017-04-03 20:43:30 -07:00