This removes the last uses of PR_smprintf from the tree (excluding the
security and nsprpub directories). It also fixes a related latent bug
in nsAppRunner.cpp (which was incorrectly freeing the pointer passed to
PR_SetEnv).
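The bug pattern, in sketch form (illustrative, with hypothetical
variable names, not the exact nsAppRunner.cpp code; PR_SetEnv keeps
the pointer it is given, with putenv-like semantics, so the string
must stay alive):

  char* envVar = PR_smprintf("SOME_VAR=%s", value);
  PR_SetEnv(envVar);         // the environment now references envVar directly
  PR_smprintf_free(envVar);  // bug: the environment is left pointing at freed
                             // memory, since PR_SetEnv does not copy its input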
MozReview-Commit-ID: GynP2PhuWWO
--HG--
extra : rebase_source : c3b83c7bd08b1c222e137a00323caf5481352845
PLDHashTable's entry store has two types of unoccupied entries: free
entries and removed entries. When searching the entry store for an
entry, we walk a chain of entries determined by the hash value. The
search can stop at a free entry, but it must continue across removed
entries, because a removed entry may have been skipped over when the
value we're searching for was added to the table, and has since been
removed. For live entries, we maintain this distinction by using one
bit of storage for a collision flag, which records that if the
hashtable entry is removed, its place in the entry store must become a
removed entry rather than a free entry.
When we add a new entry to the table, Add's semantics require that we
return an existing entry if there is one, and only create a new entry
if none exists. (Bug 1352198 suggests the possibility of a faster
alternative Add API where the caller guarantees that the key is not
already in the hashtable.) When we search for the existing entry, we
must thus continue the search across removed entries, though we record
the first removed entry we find, so that we can return it if the
search for an existing entry fails.
The existing code sets the collision flag on every entry it crosses
during the entire table search in an Add. This patch changes that
behavior so that we only set the collision flag prior to finding the
first removed entry. Setting it after we find the first removed entry
is unnecessary, since we are not making that entry part of a path to a
new entry. If it is part of a path to an existing entry, it will
already have the collision flag set.
This patch effectively puts an if (!firstRemoved) around the else branch
of the if (MOZ_UNLIKELY(EntryIsRemoved(entry))), and then refactors that
condition outwards since it is now around the contents of both the if
and else branches.
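In sketch form, the relevant part of the search loop becomes this
(simplified; the field and flag names approximate the real code):

  if (!firstRemoved) {
    if (MOZ_UNLIKELY(EntryIsRemoved(entry))) {
      firstRemoved = entry;               // candidate slot if the search fails
    } else {
      entry->mKeyHash |= kCollisionFlag;  // only entries before the first
                                          // removed entry can be on the path
                                          // to the newly added entry
    }
  }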
MozReview-Commit-ID: CsXnMYttHVy
--HG--
extra : transplant_source : W4%B8%BA%D5p%102%1B%8D%83%23%E0s%B3%B0f%0D%05%AE
We need this because the stored values in the hash table may themselves
be main-thread-only objects, and destroying them off the main thread
will cause crashes.
PLDHashTable takes the result of the hash function and multiplies it by
kGoldenRatio to ensure that it has a good distribution of bits across
the 32-bit hash value, and then zeroes out the low bit so that it can be
used for the collision flag. This result is called hash0. From hash0
it computes two different numbers used to find entries in the table
storage: hash1 is used to find an initial position in the table to
begin searching for an entry; hash2 is then used to repeatedly offset
that position (mod the size of the table) to build a chain of positions
to search.
In a table with capacity 2^c entries, hash1 is simply the upper c bits
of hash0. This patch does not change this.
Prior to this patch, hash2 was the c bits below hash1, padded at the low
end with zeroes when c > 16. (Note that bug 927705, changeset
1a02bec165e16f370cace3da21bb2b377a0a7242, increased the maximum capacity
from 2^23 to 2^26 since 2^23 was sometimes insufficient!) This manner
of computing hash2 is problematic because it increases the risk of long
chains for very large tables, since there is less variation in the hash2
result due to the zero padding.
So this patch changes the hash2 computation by using the low bits of
hash0 instead of shifting it around, thus avoiding 0 bits in parts of
the hash2 value that are significant.
Note that this changes what hash2 is in all cases except when the table
capacity is exactly 2^16, so it does change our hashing characteristics.
For tables with capacity less than 2^16, it should be using a different
second hash, but with the same amount of random-ish data. For tables
with capacity greater than 2^16, it should be using more random-ish
data.
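In sketch form, for a table with capacity 2^c, the scheme now looks
like this (illustrative only; rawHash and the oddness handling of the
probe step are simplifications, not the exact PLDHashTable code):

  uint32_t hash0 = (rawHash * kGoldenRatio) & ~1u; // low bit kept for the
                                                   // collision flag
  uint32_t hash1 = hash0 >> (32 - c);              // upper c bits: start position
  uint32_t hash2 = (hash0 & ((1u << c) - 1)) | 1;  // now the low c bits of hash0
                                                   // (previously the c bits below
                                                   // hash1, zero-padded when
                                                   // c > 16); forced odd so the
                                                   // probe step cycles the whole
                                                   // power-of-two table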
MozReview-Commit-ID: JvnxAMBY711
--HG--
extra : transplant_source : %8A%25%FB%E3H%B8_%F1G%F6%3E%0B%29%DF%20%FF%D8%E1%AEw
This adds an arena allocator that can be used as a drop-in replacement for
NSPR's PLArena. Example usage, defining an 8-byte-aligned allocator that
uses a 4K arena size:

  mozilla::ArenaAllocator<4096, 8> a;
  void* memory = a.Allocate(200);
This adds release bounds checking to ReplaceElementsAt, InsertElementAt, and
InsertElementsAt to make sure the insertion point is within the current array
bounds.
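In sketch form, each of these methods now begins with a check like the
following (illustrative; the real code may use a dedicated crash
routine rather than MOZ_RELEASE_ASSERT):

  // Inserting at Length() (appending) is legal, so the check is <=, not <.
  MOZ_RELEASE_ASSERT(aIndex <= Length());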
MozReview-Commit-ID: 1pFr8LuOROI
Handling potential nsDeque size changes means a bit of extra work.
But if the nsDeque is const, we can assume that it won't get modified,
so we can provide a more optimized iterator that doesn't need to handle
size changes.
A range-for loop in which the deque is not modified can thus be
optimized by writing:
`for (void* item : const_cast<const nsDeque&>(deque)) {...}`
MozReview-Commit-ID: AFupjoTsoH3
--HG--
extra : rebase_source : a71b09c9cb73787ce686c7c762f92ef0c208e76a
Note that iterators stay at the same index if the deque size changes
(including end-iterators staying at the end).
This means that after front operations, iterators will effectively point at
different elements! (Possibly skipping or re-visiting some.)
But this is consistent with ForEach and hand-crafted index-based for loops.
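For illustration (hypothetical snippet, not from the patch):

  nsDeque d;               // suppose d holds: A B C
  auto it = d.begin();
  ++it;                    // it is at index 1, currently B
  d.PushFront(x);          // x is some void*; d now holds: X A B C
  // it is still at index 1, which now holds A, so B will be visited again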
MozReview-Commit-ID: 5IvazJR68dG
--HG--
extra : rebase_source : c574fd2d2642d784482698c0fc861269200d1059
It's now possible to write:
for (void* item : deque) { ... }
MozReview-Commit-ID: FLoczCZd77y
--HG--
extra : rebase_source : 237293e94b478beb2bf352c1179d42c289dda145
This could have been done more simply, but the small amount of
refactoring that takes place in this commit enables better error
messages in the case where something does go wrong.