This was done by applying:
```
diff --git a/python/mozbuild/mozbuild/code-analysis/mach_commands.py b/python/mozbuild/mozbuild/code-analysis/mach_commands.py
index 789affde7bbf..fe33c4c7d4d1 100644
--- a/python/mozbuild/mozbuild/code-analysis/mach_commands.py
+++ b/python/mozbuild/mozbuild/code-analysis/mach_commands.py
@@ -2007,7 +2007,7 @@ class StaticAnalysis(MachCommandBase):
from subprocess import Popen, PIPE, check_output, CalledProcessError
diff_process = Popen(self._get_clang_format_diff_command(commit), stdout=PIPE)
- args = [sys.executable, clang_format_diff, "-p1", "-binary=%s" % clang_format]
+ args = [sys.executable, clang_format_diff, "-p1", "-binary=%s" % clang_format, '-sort-includes']
if not output_file:
args.append("-i")
```
Then running `./mach clang-format -c <commit-hash>`.
Then undoing that patch.
Then running `check_spidermonkey_style.py --fixup`.
Then running `./mach clang-format`.
I had to fix four things:
* I needed to move <utility> back down in GuardObjects.h because I was hitting
obscure problems with our system include wrappers like this:
0:03.94 /usr/include/stdlib.h:550:14: error: exception specification in declaration does not match previous declaration
0:03.94 extern void *realloc (void *__ptr, size_t __size)
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/malloc_decls.h:53:1: note: previous declaration is here
0:03.94 MALLOC_DECL(realloc, void*, void*, size_t)
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/mozilla/mozalloc.h:22:32: note: expanded from macro 'MALLOC_DECL'
0:03.94 MOZ_MEMORY_API return_type name##_impl(__VA_ARGS__);
0:03.94 ^
0:03.94 <scratch space>:178:1: note: expanded from here
0:03.94 realloc_impl
0:03.94 ^
0:03.94 /home/emilio/src/moz/gecko-2/obj-debug/dist/include/mozmemory_wrap.h:142:41: note: expanded from macro 'realloc_impl'
0:03.94 #define realloc_impl mozmem_malloc_impl(realloc)
Which I really didn't feel like digging into.
* I had to restore the order of TrustOverrideUtils.h and related files in nss
because the .inc files depend on TrustOverrideUtils.h being included earlier.
* I had to add a missing include to RollingNumber.h
* Also had to partially restore the include order in JsepSessionImpl.cpp to
avoid some -Werror issues caused by static inline functions that are defined
in a header but not used in the rest of the compilation unit.
Differential Revision: https://phabricator.services.mozilla.com/D60327
--HG--
extra : moz-landing-system : lando
rg -l 'mozilla/Move.h' | xargs sed -i 's/#include "mozilla\/Move.h"/#include <utility>/g'
Further manual fixups and cleanups to the include order incoming.
Differential Revision: https://phabricator.services.mozilla.com/D60323
--HG--
extra : moz-landing-system : lando
This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
Regardless of the size of an encoded image, SourceBuffer::Compact would
try to consolidate all of the chunks into a single chunk. If an image is
quite large, it can be actively harmful to do this, because we end up
requesting a very large contiguous chunk of memory for no real benefit, and
spend extra time on the main thread doing the memcpy/consolidation.
Instead we now cap the chunk size at 20MB. If we start allocating chunks
of this size, we will not perform compacting once we have received all of
the data (save for realloc'ing the last chunk, since it probably isn't
full).
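Roughly, the idea looks like this; a standalone sketch where Chunk,
kMaxChunkCapacity, SuggestChunkCapacity and ShouldCompact are hypothetical
stand-ins for the actual imagelib bookkeeping, not the real code:
```
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a SourceBuffer chunk.
struct Chunk {
  size_t capacity = 0;
  size_t length = 0;
};

// 20MB cap on any single chunk allocation (the value from this commit).
constexpr size_t kMaxChunkCapacity = 20 * 1024 * 1024;

// Never request a single contiguous block larger than the cap; very large
// images end up stored as several capped chunks instead.
size_t SuggestChunkCapacity(size_t bytesStillExpected) {
  return std::min(bytesStillExpected, kMaxChunkCapacity);
}

// Skip consolidation once cap-sized chunks are in play; the only cleanup
// left is realloc'ing the (probably partial) last chunk down to size.
bool ShouldCompact(const std::vector<Chunk>& chunks) {
  for (const Chunk& c : chunks) {
    if (c.capacity >= kMaxChunkCapacity) {
      return false;
    }
  }
  return chunks.size() > 1;
}
```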
On a related note, if we hit an out-of-memory condition in the middle of
appending data to the SourceBuffer, we would swallow the error. This is
because nsIInputStream::ReadSegments will succeed if any data was
written. This leaves the SourceBuffer out of sync. We now propagate this
error up properly to the higher levels.
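A hedged sketch of that error-propagation pattern, using generic stand-ins
(Status, Buffer, WriteSegment, FinishAppend) rather than the real
nsresult/nsIInputStream::ReadSegments types:
```
#include <cstddef>
#include <new>
#include <vector>

enum class Status { Ok, OutOfMemory };

struct Buffer {
  std::vector<char> bytes;
  Status deferredError = Status::Ok;  // failure remembered mid-stream
};

// Writer callback in the ReadSegments style: if an append fails part-way
// through, the stream call itself still reports success because some bytes
// were consumed, so the error has to be stashed for the caller.
Status WriteSegment(Buffer& buf, const char* data, size_t len) {
  try {
    buf.bytes.insert(buf.bytes.end(), data, data + len);
  } catch (const std::bad_alloc&) {
    buf.deferredError = Status::OutOfMemory;
    return Status::OutOfMemory;  // stop consuming further segments
  }
  return Status::Ok;
}

// After the stream call returns, surface any stashed error instead of
// silently treating a partial append as success.
Status FinishAppend(const Buffer& buf, Status streamResult) {
  return buf.deferredError != Status::Ok ? buf.deferredError : streamResult;
}
```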
We should use AutoTArray instead because telemetry shows that the vast
majority of images loaded (95%+) contain only a single chunk when
decoding finishes. Even if part 2 of this bug increases the number of
images loaded with multiple chunks, we still call SourceBuffer::Compact
after the decode is complete, which typically will reduce the number of
chunks to 1 (unless memory is very low and we fail to consolidate the
chunks). Thus the array should rarely contain more than 1 chunk on
anything but a temporary basis, and we can easily save the malloc
overhead.
Note that SourceBuffer::AppendChunk still uses the fallible variant of
nsTArray::AppendElement.
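As a rough standalone illustration of the AutoTArray idea (one element of
inline storage plus a fallible append), with Chunk and ChunkList as
hypothetical stand-ins for the actual types:
```
#include <cstddef>
#include <new>
#include <vector>

struct Chunk { /* ... */ };

class ChunkList {
 public:
  // Fallible append, mirroring the fallible nsTArray::AppendElement:
  // return false instead of aborting when a heap allocation fails.
  bool AppendChunk(const Chunk& aChunk) {
    if (mLength == 0) {
      mInline = aChunk;  // 95%+ of images never leave this branch
      mLength = 1;
      return true;
    }
    try {
      mOverflow.push_back(aChunk);  // rare multi-chunk case hits the heap
    } catch (const std::bad_alloc&) {
      return false;
    }
    ++mLength;
    return true;
  }

  size_t Length() const { return mLength; }

 private:
  Chunk mInline{};               // inline storage: no malloc for one chunk
  std::vector<Chunk> mOverflow;  // heap storage for additional chunks
  size_t mLength = 0;
};
```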
SourceBuffer::Compact attempts to consolidate multiple discrete
allocations into a single buffer, as well as trim excess capacity from a
single allocation if we set aside too much. Using realloc lets
jemalloc (or whatever heap implementation we have) decide which is
better -- growing the existing buffer if there is sufficient free memory
contiguous with the first chunk, or allocating a new buffer entirely.
Since we were going to copy regardless, this should result either in an
improvement or the status quo. Brief empirical testing on Linux suggests
somewhere from 1/3 to 1/2 of allocations resulted in reusing the same
data pointer (and presumably avoided a copy as a result). This also has
the advantage of potentially reducing OOM errors, as the heap may have
enough room to satisfy an expansion even when it could not satisfy an
entirely new buffer.
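A minimal sketch of the realloc approach, with illustrative names and a
simplified signature rather than the actual SourceBuffer::Compact code:
```
#include <cstddef>
#include <cstdlib>
#include <cstring>

// 'first' owns a malloc'd buffer holding 'firstLength' bytes; 'rest' points
// at 'restLength' bytes gathered from the remaining chunks.
char* ConsolidateChunks(char* first, size_t firstLength,
                        const char* rest, size_t restLength) {
  // Let the heap decide: grow the existing buffer in place if there is
  // contiguous free space after it, or move it to a fresh allocation.
  char* combined =
      static_cast<char*>(std::realloc(first, firstLength + restLength));
  if (!combined) {
    return nullptr;  // OOM: 'first' is still valid; caller keeps its chunks
  }
  // Either way we were going to copy the trailing data.
  std::memcpy(combined + firstLength, rest, restLength);
  return combined;
}
```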
The ICO decoder creates a cloned SourceBufferIterator for its own
SourceBuffer bounded by the resource size. This iterator is used by the
child decoder (PNG, BMP) for decoding the actual image. However, we rely
upon the ICO decoder and its iterator to drive the event loop, rather than
the child decoder and the cloned iterator. The cloned iterator knows how
many bytes it requires, but it is problematic to give it a consumer to
tell us when to resume without changes to StreamingLexer.
Without a consumer (IResumable), we won't have anything to notify when
we get the appropriate amount of data for the caller. If the caller
tries to advance after some unknown amount of data has been written to
the SourceBuffer, then it may need to go back to waiting. Thus it should
only assert for a spurious wakeup if we have an actual consumer.
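A hedged sketch of that assertion condition; Consumer and Advance are
placeholders for IResumable and the iterator's advance path, not the real
interfaces:
```
#include <cassert>
#include <cstddef>

struct Consumer { /* would be notified once enough data has arrived */ };

enum class AdvanceResult { Ready, NeedMoreData };

AdvanceResult Advance(size_t bytesAvailable, size_t bytesNeeded,
                      const Consumer* consumer) {
  if (bytesAvailable >= bytesNeeded) {
    return AdvanceResult::Ready;
  }
  // With a consumer we were explicitly told to resume, so waking up without
  // enough data is a bug. Without one (the cloned ICO child iterator),
  // partial data is expected: just go back to waiting.
  assert(!consumer && "spurious wakeup with a registered consumer");
  return AdvanceResult::NeedMoreData;
}
```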
The bulk of this commit was generated with a script, executed at the top
level of a typical source code checkout. The only non-machine-generated
part was modifying MFBT's moz.build to reflect the new naming.
CLOSED TREE makes big refactorings like this a piece of cake.
# The main substitution.
find . -name '*.cpp' -o -name '*.cc' -o -name '*.h' -o -name '*.mm' -o -name '*.idl'| \
xargs perl -p -i -e '
s/nsRefPtr\.h/RefPtr\.h/g; # handle includes
s/nsRefPtr ?</RefPtr</g; # handle declarations and variables
'
# Handle a special friend declaration in gfx/layers/AtomicRefCountedWithFinalize.h.
perl -p -i -e 's/::nsRefPtr;/::RefPtr;/' gfx/layers/AtomicRefCountedWithFinalize.h
# Handle nsRefPtr.h itself, a couple places that define constructors
# from nsRefPtr, and code generators specially. We do this here, rather
# than indiscriminately s/nsRefPtr/RefPtr/, because that would rename
# things like nsRefPtrHashtable.
perl -p -i -e 's/nsRefPtr/RefPtr/g' \
mfbt/nsRefPtr.h \
xpcom/glue/nsCOMPtr.h \
xpcom/base/OwningNonNull.h \
ipc/ipdl/ipdl/lower.py \
ipc/ipdl/ipdl/builtin.py \
dom/bindings/Codegen.py \
python/lldbutils/lldbutils/utils.py
# In our indiscriminate substitution above, we renamed
# nsRefPtrGetterAddRefs, the class behind getter_AddRefs. Fix that up.
find . -name '*.cpp' -o -name '*.h' -o -name '*.idl' | \
xargs perl -p -i -e 's/nsRefPtrGetterAddRefs/RefPtrGetterAddRefs/g'
if [ -d .git ]; then
git mv mfbt/nsRefPtr.h mfbt/RefPtr.h
else
hg mv mfbt/nsRefPtr.h mfbt/RefPtr.h
fi
--HG--
rename : mfbt/nsRefPtr.h => mfbt/RefPtr.h
This commit was generated using the following script, executed at the
top level of a typical source code checkout.
# Don't modify select files in mfbt/ because it's not worth trying to
# tease out the dependencies currently.
#
# Don't modify anything in media/gmp-clearkey/0.1/ because those files
# use their own RefPtr, defined in their own RefCounted.h.
find . -name '*.cpp' -o -name '*.h' -o -name '*.mm' -o -name '*.idl'| \
grep -v 'mfbt/RefPtr.h' | \
grep -v 'mfbt/nsRefPtr.h' | \
grep -v 'mfbt/RefCounted.h' | \
grep -v 'media/gmp-clearkey/0.1/' | \
xargs perl -p -i -e '
s/mozilla::RefPtr/nsRefPtr/g; # handle declarations in headers
s/\bRefPtr</nsRefPtr</g; # handle local variables in functions
s#mozilla/RefPtr.h#mozilla/nsRefPtr.h#; # handle #includes
s#mfbt/RefPtr.h#mfbt/nsRefPtr.h#; # handle strange #includes
'
# |using mozilla::RefPtr;| is OK; |using nsRefPtr;| is invalid syntax.
find . -name '*.cpp' -o -name '*.mm' | xargs sed -i -e '/using nsRefPtr/d'
# RefPtr.h used |byRef| for dealing with COM-style outparams.
# nsRefPtr.h uses |getter_AddRefs|.
# Fixup that mismatch.
find . -name '*.cpp' -o -name '*.h'| \
xargs perl -p -i -e 's/byRef/getter_AddRefs/g'
Various bits depend on RefPtr.h to provide RefCounted<T> and RefPtr<T>.
It will be easier to manage an automatic conversion from RefPtr<T> to
nsRefPtr<T> if we split out the dependency on RefCounted<T> first.
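For context, a hedged sketch of what that split looks like; this is
illustrative only, not the actual MFBT header contents:
```
// mfbt/RefCounted.h (sketch): the sole home of the intrusive refcount base.
template <typename T>
class RefCounted {
 public:
  void AddRef() { ++mRefCnt; }
  void Release() {
    if (--mRefCnt == 0) {
      delete static_cast<T*>(this);
    }
  }

 protected:
  ~RefCounted() = default;

 private:
  int mRefCnt = 0;
};

// mfbt/RefPtr.h (sketch): still provides RefPtr<T> and, during the
// transition, re-exposes RefCounted<T> by including the new header, so
// callers that only need RefCounted<T> can switch their #include without
// depending on RefPtr<T> at all.
// #include "mozilla/RefCounted.h"
template <typename T>
class RefPtr { /* smart pointer driving AddRef()/Release() */ };
```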