Currently, huge allocations are completely independent from arenas. But
in order to ensure that e.g. moz_arena_realloc can't reallocate huge
allocations from another arena, we need to track which arena was
responsible for the huge allocation. We do that in the corresponding
extent_node_t.
Both functions do essentially the same thing, one having more validation
than the other. We can use a template with a boolean parameter to avoid
the duplication.
Furthermore, we're soon going to require, in some cases, more
information than just the size of the allocation, so we wrap their
result in a helper class that gives information about an active
allocation.
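As a rough illustration of the boolean-template idea (the names and the wrapped
information here are hypothetical, not the actual mozjemalloc functions):

#include <cstddef>

// Hypothetical helper class describing an active allocation.
struct AllocInfo {
  size_t mSize;
};

// One implementation, instantiated twice; the boolean parameter selects
// whether the extra validation is compiled in.
template <bool Validate>
static AllocInfo GetAllocInfo(const void* aPtr) {
  if (Validate) {
    // Extra checks only present in the validating variant, e.g. making sure
    // aPtr really belongs to the allocator before touching any metadata.
    if (!aPtr) {
      return AllocInfo{0};
    }
  }
  // ... shared metadata lookup, identical in both variants ...
  return AllocInfo{/* size found for aPtr */ 0};
}

static inline AllocInfo GetAllocInfoUnchecked(const void* aPtr) {
  return GetAllocInfo<false>(aPtr);
}

static inline AllocInfo GetAllocInfoValidated(const void* aPtr) {
  return GetAllocInfo<true>(aPtr);
}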
FloorLog2 essentially expands to a compiler builtin/intrinsic, which in
turn expands to a single machine instruction on tier 1 and other
platforms. On platforms where that's not the case, we can still expect
the compiler to generate fast code. So overall, this is better than
manually using a log2 lookup table.
Also replace a manual power-of-two check with mozilla::IsPowerOfTwo,
which does the same test.
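For instance, a sketch of the kind of replacement (surrounding allocator code
omitted):

#include <cstddef>

#include "mozilla/Assertions.h"      // MOZ_ASSERT
#include "mozilla/MathAlgorithms.h"  // FloorLog2, IsPowerOfTwo

// Computing a log2-based index from a power-of-two size, with no lookup table.
static inline size_t Log2Index(size_t aSize) {
  MOZ_ASSERT(mozilla::IsPowerOfTwo(aSize));  // instead of a manual (x & (x - 1)) == 0 check
  return mozilla::FloorLog2(aSize);          // a single instruction on tier 1 platforms
}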
--HG--
extra : rebase_source : e8164c254723c74ef83e798073327ec6afa6f1fb
They are both only used once, are trivial wrappers, and even repeat the
same assertions.
--HG--
extra : rebase_source : b40b26e303cb69a451e63937efd8a666053954e5
There are multiple flaws to the current code:
- The loop calculating the right parameters for a given run size is
repeated.
- The loop trying different run sizes doesn't actually work to fulfil
the overhead constraint: while it stops when the constraint is
fulfilled, the values that are kept are those from the previous
iteration, which may be well over the constraint.
In practice, the latter resulted in a few surprising results:
- most size classes had an overhead slightly over the constraint
(1.562%), which, while not terribly bad, doesn't match the set
expectations.
- some size classes ended up with relatively good overheads only because
of the additional constraint that run sizes had to be larger than the
run size of smaller size classes. Without this constraint, some size
classes would end up with overheads well over 2% just because that
happens to be the last overhead value before reaching below the 1.5%
constraint.
Furthermore, for higher-level fragmentation concerns, smaller run sizes
are better than larger run sizes, and in many cases, smaller run sizes
can yield the same (or even sometimes, better) overhead as larger run
sizes. For example, the current code chooses 8KiB for runs of size 112,
but using 4KiB runs would actually yield the same number of regions, and
the same overhead.
We thus change the calculation to:
- not force runs to be smaller than those of smaller classes.
- avoid the code repetition.
- actually enforce its overhead constraint, but make it 1.6%.
- for especially small size classes, relax the overhead constraint to
2.4%.
This leads to an uneven set of run sizes:
size class before after
4 4 KiB 4 KiB
8 4 KiB 4 KiB
16 4 KiB 4 KiB
32 4 KiB 4 KiB
48 4 KiB 4 KiB
64 4 KiB 4 KiB
80 4 KiB 4 KiB
96 4 KiB 4 KiB
112 8 KiB 4 KiB
128 8 KiB 8 KiB
144 8 KiB 4 KiB
160 8 KiB 8 KiB
176 8 KiB 4 KiB
192 12 KiB 4 KiB
208 12 KiB 8 KiB
224 12 KiB 4 KiB
240 12 KiB 4 KiB
256 16 KiB 16 KiB
272 16 KiB 4 KiB
288 16 KiB 4 KiB
304 16 KiB 12 KiB
320 20 KiB 12 KiB
336 20 KiB 4 KiB
352 20 KiB 8 KiB
368 20 KiB 4 KiB
384 24 KiB 8 KiB
400 24 KiB 20 KiB
416 24 KiB 16 KiB
432 24 KiB 12 KiB
448 28 KiB 4 KiB
464 28 KiB 16 KiB
480 28 KiB 8 KiB
496 28 KiB 20 KiB
512 32 KiB 32 KiB
1024 64 KiB 64 KiB
2048 132 KiB 128 KiB
* Note: "before" means before this change only, not before the whole set
of changes from this bug; before those, the run size for 96 could be
8 KiB in some configurations.
In most cases, the overhead hasn't changed, with a few exceptions:
- Improvements:
size class before after
208 1.823% 0.977%
304 1.660% 1.042%
320 1.562% 1.042%
400 0.716% 0.391%
464 1.283% 0.879%
480 1.228% 0.391%
496 1.395% 0.703%
- Regressions:
352 0.312% 1.172%
416 0.130% 0.977%
2048 1.515% 1.562%
For the regressions, the values are either still well within the
constraint or so close to the previous value that I don't feel it's
worth trying to avoid them, at the risk of making things worse for
other size classes.
--HG--
extra : rebase_source : fdff18df8a0a35c24162313d4adb1a1c24fb6e82
On 64-bit platforms, sizeof(arena_run_t) includes a padding at the end
of the struct to align to 64-bit, since the last field, regs_mask, is
32-bit, and its offset can be a multiple of 64-bit depending on the
configuration. But we're doing size calculations for a dynamically-sized
regs_mask based on sizeof(arena_run_t), completely ignoring that
padding.
Instead, we use the offset of regs_mask as a base for the calculation.
Practically speaking, this doesn't change much with the current set of
values, but could affect the overheads when we squeeze run sizes more.
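Roughly, the situation looks like this (field types simplified; the real layout
is in mozjemalloc.cpp):

#include <cstddef>  // offsetof

// Simplified shape of the run header; regs_mask is dynamically sized.
struct arena_run_t {
  void* bin;              // really an arena_bin_t*
  unsigned regs_minelm;
  unsigned nfree;
  unsigned regs_mask[1];  // actually regs_mask_nelms elements
};

// On 64-bit, sizeof(arena_run_t) is 24 (20 bytes of fields, padded to 24),
// while regs_mask starts at offset 16. Basing the header size on the offset
// avoids counting the tail padding, roughly:
//   before: sizeof(arena_run_t) + sizeof(unsigned) * (regs_mask_nelms - 1)
//   after:  offsetof(arena_run_t, regs_mask) + sizeof(unsigned) * regs_mask_nelms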
--HG--
extra : rebase_source : a3bdf10a507b81aa0b2b437031b884e18499dc8f
This makes the run header larger than necessary, which happens to make
the current arena_bin_run_calc_size pick 8KiB runs for size class 96
when MOZ_DIAGNOSTIC_ASSERT_ENABLED is set. This change makes it pick
4KiB runs, making MOZ_DIAGNOSTIC_ASSERT_ENABLED builds use the same set
of run sizes as non-MOZ_DIAGNOSTIC_ASSERT_ENABLED builds.
--HG--
extra : rebase_source : fd7ef2d58ec601186647799e9dcf8146e723241c
First and foremost, the code and corresponding comment weren't in
agreement on what's going on.
The code checks:
RUN_MAX_OVRHD * (bin->mSizeClass << 3) <= RUN_MAX_OVRHD_RELAX
which is equivalent to:
(bin->mSizeClass << 3) <= RUN_MAX_OVRHD_RELAX / RUN_MAX_OVRHD
replacing constants:
(bin->mSizeClass << 3) <= 0x1800 / 0x3d
The left hand side is just bin->mSizeClass * 8, and the right hand side
is about 100, so this can be roughly summarized as:
bin->mSizeClass <= 12
The comment says the overhead constraint is relaxed for runs with a
per-region overhead greater than RUN_MAX_OVRHD / (mSizeClass << (3+RUN_BFP)).
Which, by itself, doesn't make sense, because it translates to
61 / (mSizeClass * 32768), which, even for a size class of 1, would mean
less than 0.2%, and this value would be even smaller for bigger classes.
The comment would make more sense with RUN_MAX_OVRHD_RELAX, but would
still not match what the code was doing.
So we change how the relaxed rule works, as per the comment in the new
code, and make it happen after the standard run overhead constraint has
been checked.
--HG--
extra : rebase_source : cec35b5bfec416761fbfbcffdc2b39f0098af849
The description above the RUN_* constant definitions talks about binary
fixed point math, which is one way to look at the problem, but a clearer
one is to look at it as comparing ratios in a way that doesn't use
divisions.
So, starting from the current expression:
(try_reg0_offset << RUN_BFP) <= RUN_MAX_OVRHD * try_run_size
This can be rewritten as
try_reg0_offset * (1 << RUN_BFP) <= RUN_MAX_OVRHD * try_run_size
Dividing both sides with ((1 << RUN_BFP) * try_run_size), and
simplifying, gives us:
try_reg0_offset / try_run_size <= RUN_MAX_OVRHD / (1 << RUN_BFP)
Replacing the constants:
try_reg0_offset / try_run_size <= 0x3d / (1 << 12)
or
try_reg0_offset / try_run_size <= 61 / 4096
61 / 4096 is roughly 1.5%.
So what the check really intends to do is check that the overhead is
below 1.5%.
So we introduce a helper class and a user-defined literal that makes the
test more self-descriptive, while producing identical machine code.
This is a lot of code to add, but I think it's one of those cases where
abstraction can help make the code clearer.
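Something along these lines (a sketch, not the exact class and names that
landed): the percentage is kept as an integer ratio, and the comparison stays a
cross-multiplication, so it can compile to the same code as the original
expression.

#include <cstddef>

// A percentage stored as hundredths of a percent, compared without division.
class Percentage {
 public:
  explicit constexpr Percentage(unsigned aHundredths) : mHundredths(aHundredths) {}

  // Whether aPart / aTotal <= mHundredths / 10000, by cross-multiplication.
  constexpr bool CoversRatio(size_t aPart, size_t aTotal) const {
    return aPart * 10000 <= size_t(mHundredths) * aTotal;
  }

 private:
  unsigned mHundredths;  // e.g. 150 for 1.5%
};

constexpr Percentage operator""_percent(long double aValue) {
  return Percentage(unsigned(aValue * 100 + 0.5));
}

// The original check then reads as "the run overhead is at most 1.5% of the run":
//   (1.5_percent).CoversRatio(try_reg0_offset, try_run_size)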
--HG--
extra : rebase_source : 3d4a94f524a60e40ba75859c4f761f59d689e81a
This is, practically speaking, a no-op, and will hopefully help make the
following changes clearer.
--HG--
extra : rebase_source : b704bdf2ae46c2408e0061363822b9744ef449cb
QUANTUM_2POW_MIN is exactly 4, and we are unlikely to ever make it
smaller. Also turn a MOZ_ASSERT into a static_assert, because it only
uses constants, and will fail if QUANTUM_2POW_MIN is lowered without
touching size_invs.
--HG--
extra : rebase_source : 7c8ee3c0ea30a88bddba816c41c6f63914f7a03c
There is a set of "constants" that are actually globals that depend on
the page size that we get at runtime, when compiling without
MALLOC_STATIC_PAGESIZE, but that are actual constants when compiling
with it. Their values were unfortunately duplicated.
We set up a set of macros that allow making the declarations unique.
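The pattern is roughly as follows (macro and variable names here are
illustrative, not the exact ones used):

#include <cstddef>

#ifdef MALLOC_STATIC_PAGESIZE
// Page size known at compile time: the declarations are real constants.
#define PAGE_DEPENDENT_CONST(type, name, value) static const type name = value;
#else
// Page size only known at runtime: the same names become globals that
// malloc_init() fills in once the page size has been queried.
#define PAGE_DEPENDENT_CONST(type, name, value) static type name;
#endif

// A single list of declarations instead of two diverging copies:
PAGE_DEPENDENT_CONST(size_t, pagesize, size_t(1) << 12)
PAGE_DEPENDENT_CONST(size_t, pagesize_mask, (size_t(1) << 12) - 1)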
--HG--
extra : rebase_source : 56557b7ba01ee60fe85f2cd3c2a0aa910c4c93c6
At the same time, add user-defined literals to make those constants more
legible.
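For example, size literals of this kind (a sketch; the actual definitions may
differ in detail):

#include <cstddef>

constexpr size_t operator""_KiB(unsigned long long aNum) { return size_t(aNum) * 1024; }
constexpr size_t operator""_MiB(unsigned long long aNum) { return size_t(aNum) * 1024 * 1024; }

// e.g. instead of `static const size_t kChunkSize = 1 << 20;`
static const size_t kChunkSize = 1_MiB;
static const size_t kRecycleLimit = 128_MiB;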
--HG--
extra : rebase_source : ce143ad9d8a6603179042d8cf432f00c815156c5
At the moment, while they are used before their declaration, it's from a
macro. It is desirable to replace the macros with C++ constants, which
will require the structures being defined first.
--HG--
extra : rebase_source : 7a351dafea04a7d75b6eec50fa52fb49c135e569
We create a new helper class that rounds up allocation sizes and
categorizes them. Compilers are smart enough to elide what they don't
need; in malloc_good_size, for example, they elide the code related to
the class type enum.
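A rough sketch of the shape of that helper (thresholds and rounding here are
simplified placeholders, not the allocator's real size-class boundaries):

#include <cstddef>

class SizeClass {
 public:
  enum ClassType { Tiny, Quantum, SubPage, Large };

  explicit SizeClass(size_t aSize) {
    if (aSize <= 8) {
      mType = Tiny;
      mSize = 8;
    } else if (aSize <= 512) {
      mType = Quantum;
      mSize = (aSize + 15) & ~size_t(15);  // round up to the 16-byte quantum
    } else if (aSize <= 4096) {
      mType = SubPage;
      mSize = NextPowerOfTwo(aSize);
    } else {
      mType = Large;
      mSize = (aSize + 4095) & ~size_t(4095);  // round up to the page size
    }
  }

  size_t Size() const { return mSize; }
  ClassType Type() const { return mType; }

 private:
  static size_t NextPowerOfTwo(size_t aSize) {
    size_t p = 1;
    while (p < aSize) {
      p <<= 1;
    }
    return p;
  }

  ClassType mType;
  size_t mSize;
};

// malloc_good_size(n) can then just be SizeClass(n).Size(); the code computing
// mType is elided there because the type is never read.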
--HG--
extra : rebase_source : 61381e600587b045e720a85a7b46673edeb691b9
Because of alignment issues due to the system glibc when running the
SSE2 gcov code generated during the PGO profile generation phase,
Firefox crashes during the PGO profiling run. We work around the issue
by disabling SSE2 when building mozjemalloc during that phase. That
shouldn't affect the coverage data anyway, which is bound to the
original C++ code, and the profile-use code generation will still emit
SSE2 based on the coverage data if it needs to.
--HG--
extra : rebase_source : 3596fdc795cdef0789f3a2dd8f10b42cde00430f
We introduce the notion of private arenas, separate from other arenas
(main and thread-local). They are kept in a separate arena tree, and
arena lookups from moz_arena_* functions only access the tree of
private arenas. Iteration still goes through all arenas, private and
non-private.
--HG--
extra : rebase_source : 86c43c7c920b01eb6fa1fa214d612fd9220eac3e
We create the ArenaCollection class to handle operations on the
arena tree. Ideally, iter() would trigger locking, but the
prefork/postfork code complicates things, so we leave this for later.
--HG--
extra : rebase_source : bd7021098baf0ec01c14063294098edea4473d36
Note we use a local variable for fallible allocator because using plain
`new (fallible)` would require some figuring out for non-Firefox builds
(e.g. standalone js).
--HG--
extra : rebase_source : 2132f98ebc7e37a139b673f80631e672bcf8ed15
RedBlackTree::{Insert,Remove} allocate an object on the stack for its
RedBlackTreeNode, and that shouldn't have side effects if the type
happens to have a constructor. This will allow adding constructors to
some of the mozjemalloc types.
--HG--
extra : rebase_source : 14dbb7d73c86921701d83156186df5d645530dda
We introduce the notion of private arenas, separate from other arenas
(main and thread-local). They are kept in a separate arena tree, and
arena lookups from moz_arena_* functions only access the tree of
private arenas. Iteration still goes through all arenas, private and
non-private.
--HG--
extra : rebase_source : ec48631a4a65520892331c1fcd62db37ed35ba1d
We create the ArenaCollection class to handle operations on the
arena tree. Ideally, iter() would trigger locking, but the
prefork/postfork code complicates things, so we leave this for later.
--HG--
extra : rebase_source : 90c96575d65c920f75aa621ba119d354d1ce252a
Note we use the deprecated `new (fallible_t())` form because using `new
(fallible)` would require some figuring out for non-Firefox builds (e.g.
standalone js).
--HG--
extra : rebase_source : 0159d8476a1b5509330517c00af6c387d522722d
RedBlackTree::{Insert,Remove} allocate an object on the stack for its
RedBlackTreeNode, and that shouldn't have side effects if the type
happens to have a constructor. This will allow adding constructors to
some of the mozjemalloc types.
--HG--
extra : rebase_source : 14dbb7d73c86921701d83156186df5d645530dda
- In the cases where it's used on powers of 2, replace it with
FloorLog2() + 1.
- In the cases where it's used on any kind of number, replace it with
CountTrailingZeroes, which is `ffs(x) - 1`.
- In the case of tiny allocations in arena_t::MallocSmall, we rearrange
the code so that the intent is clearer, which also simplifies the
expression for the mBins offset: mBins[0] is the first tiny bucket,
for allocations of sizes 1 << TINY_MIN_2POW, mBins[1] for allocations
of size 1 << (TINY_MIN_2POW + 1), etc. up to small_min. So the offset
is really the log2 of the normalized size.
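A sketch of that last point (TINY_MIN_2POW here is an assumed placeholder
value; the real constant and the mBins array live in the arena code):

#include <cstddef>

#include "mozilla/MathAlgorithms.h"  // FloorLog2, RoundUpPow2

constexpr size_t TINY_MIN_2POW = 1;

static size_t TinyBinIndex(size_t aSize) {
  size_t size = mozilla::RoundUpPow2(aSize);  // normalize to a power of two
  if (size < (size_t(1) << TINY_MIN_2POW)) {
    size = size_t(1) << TINY_MIN_2POW;
  }
  // mBins[0] serves 1 << TINY_MIN_2POW, mBins[1] serves 1 << (TINY_MIN_2POW + 1), ...
  return mozilla::FloorLog2(size) - TINY_MIN_2POW;
}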
--HG--
extra : rebase_source : 954a655dcaa93857dc976078e133704bb141de0d
Comparing ffs(x) == ffs(y), when x and y are guaranteed to be powers of
2 (or 0, or 1), is the same as x == y.
--HG--
extra : rebase_source : d6cc3399d85fa9fda2559435e99adbfb82ac8da0
The sentinel was taking as much space as one element of the tree, while
only really used for its RedBlackTreeNode, wasting space.
This results in some decrease in struct sizes, for example on 64-bit
Linux:
- arena_bin_t: 80 -> 56
- arena_t (excluding mBins): 224 -> 144
- arena_t + dynamic size of mBins: 3024 -> 2104
It also decreases the size of several globals:
- gChunksBySize, gChunksByAddress, huge: 64 -> 8
- gArenaTree: 312 -> 8
--HG--
extra : rebase_source : d5bb52f93e064ab4cca3fb07b2c5a77ce57fb7db
Interestingly, this turns single-instruction checks into
two-instruction checks (at least with GCC, from one cmpb to a movl
followed by a testl), but this is due to Atomic<bool> being actually
backed by a uint32_t, not by the use of atomics.
--HG--
extra : rebase_source : cfc0bec2113b44635120216b4abbbbbe9028b286
- First, MOZ_DIAGNOSTIC_ASSERT_ENABLED is always true when MOZ_DEBUG is
set, so don't check for MOZ_DEBUG.
- Second, all the magic number assertions should be
MOZ_DIAGNOSTIC_ASSERTs instead of MOZ_ASSERTs.
--HG--
extra : rebase_source : 5601cd13604e21c46a9f0ad8b0b4d6fc399b853e
Some need initialization to happen, some can be skipped when the
allocator was not initialized, and others should crash.
--HG--
extra : rebase_source : d6c2697ca27f6110fe52a067440a0583e0ed0ccd
Also rearrange some code accordingly, but don't fix indentation issues
just yet.
Also apply changes from the google-readability-braces-around-statements
check.
But don't apply the modernize-use-nullptr recommendation about
strerror_r because it's wrong (bug #1412214).
--HG--
extra : histedit_source : 2d61af7074fbdc5429902d9c095c69ea30261769
Backed out changeset f75f427fde1d
Backed out changeset 6278aa5fec1d
Backed out changeset eefc284bbf13
Backed out changeset e2b391ae4688
Backed out changeset 58070c2511c6
--HG--
extra : histedit_source : d14fa171a5cf4d9400cae7f94d5cc64a1e58b98d%2C856ad5b650074a1dcff2edb0b95adc20aaf38db3
Also rearrange some code accordingly, but don't fix indentation issues
just yet.
Also apply changes from the google-readability-braces-around-statements
check.
But don't apply the modernize-use-nullptr recommendation about
strerror_r because it's wrong (bug #1412214).
--HG--
extra : rebase_source : 28b07aca01fd0275127341233755852cccda22fe
This will make allocation operations return nullptr in the face of OOM,
allowing callers to either handle the allocation error or for the normal
OOM machinery, which also records the requested size, to kick in.
--HG--
extra : rebase_source : 723048645cb3f0db269c91f9d023bb06825a817b
On its own, this change is unnecessary. But it prepares for pages_commit
becoming fallible in the next commit, allowing a possibly early return while
leaving metadata in a consistent state: runs are still available for
future use even if they couldn't be committed immediately.
--HG--
extra : rebase_source : 971e5c49882409c00ac61ec641d469dcc94e5cc2
Bug 1403843 made more things constant, but missed a few that don't
depend on the page size.
--HG--
extra : rebase_source : 036722744ff7054de9d081bde1f4c7b035fd9501
- Move variable declarations to their initialization.
- Remove gotos and use RAII.
--HG--
extra : rebase_source : 9d983452681edf63593d033727ba6faebe418afe
The chunk_recycle and chunk_record functions are never called with
different red-black trees than the globals, so just use them directly
instead of passing them as argument. The functions were already using
the associated global mutex anyways.
At the same time, rename them.
--HG--
extra : rebase_source : c45bb2e584c61b458eab4343562eb3a5a64543a3
Instead of calling it with a boolean indicating whether the call was for
base allocations or not, and returning immediately if it was, avoid the
call altogether.
--HG--
extra : rebase_source : abb2a3d0eaefc16efd2e828f09a330ab2a3b8b1f
jemalloc_ptr_info takes an outparam, which makes it harder to use in a
debugger: you'd need to find some memory to use as outparam and pass
that in.
So for convenience, we add a non-exported symbol for use in debuggers,
which just returns a pointer to a static buffer for the result.
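The helper is roughly of this shape (a sketch; the include and exact placement
are approximate):

#include "mozmemory.h"  // jemalloc_ptr_info(), jemalloc_ptr_info_t

namespace Debug {
// Returns a pointer to a static buffer so a debugger doesn't have to provide
// storage for the outparam of the public jemalloc_ptr_info().
static jemalloc_ptr_info_t* jemalloc_ptr_info(const void* aPtr) {
  static jemalloc_ptr_info_t sInfo;
  ::jemalloc_ptr_info(aPtr, &sInfo);
  return &sInfo;
}
}  // namespace Debug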
lldb:
(lldb) print *Debug::jemalloc_ptr_info($0)
(jemalloc_ptr_info_t) $1 = (tag = TagLiveSmall, addr=0x000000011841dd80, size = 160)
gdb:
(gdb) print *Debug::jemalloc_ptr_info($0)
$1 = {tag = TagLiveSmall, addr = 0x7f8e7ebd0dc0, size = 96}
windbg:
0:040> .call Debug::jemalloc_ptr_info(0x6187880)
Thread is set up for call, 'g' will execute.
WARNING: This can have serious side-effects,
including deadlocks and corruption of the debuggee.
0:040> g
.call returns:
struct jemalloc_ptr_info_t * 0x7501f3f4
+0x000 tag : 1 ( TagLiveSmall )
+0x004 addr : 0x06187880 Void
+0x008 size : 0x20
--HG--
extra : rebase_source : 09aedd48aabee3e273a17000a61b1d09cdd619b9
Bug 1378258 removed malloc_print_stats and bug 1379890 further removed
the subsequently unused arena stats. It turns out there are also some
huge stats that have been unused since bug 1378258, and that are still
there, so remove them.
--HG--
extra : rebase_source : ae71c7507143503dff8d2e517352a97eb53e4676
The way inlining is disabled in mozjemalloc is via a #define of "inline"
to nothing, which is a dubious way to do that. This makes the compiler
trigger warnings that we turn into errors with -Werror for some static
functions. While there
are such functions in mozjemalloc.cpp that could be fixed by wrapping
them in the right #ifdefs, there are also others coming from headers,
and it's not something that can be fixed in a satisfactory way.
The right way to disable inlining is to pass the right compiler flags
for that. But inlining is the least of the problems to debug optimized
C++ code, so it feels like if debugging requires some optimization
tweaking, it should be done manually with compile flags when needed,
instead of fiddling with #defines to remove keywords.
--HG--
extra : rebase_source : 962c3409f86060c4d5ddf966778b58b64f89c31d
Bug 1403444 massively refactored the red-black tree code, with the
result of removing the warnings the old code was triggering. We can thus
remove the exceptions for those warnings now.
--HG--
extra : rebase_source : 76c7ce7a7282471399c7592601f6986bfb33b256
Ideally, we'd be reusing some Mutex class we have in Gecko, the base one
in mozglue/misc being the best candidate. However, the constraints in
mozjemalloc make that inconvenient:
- Can't have a constructor because malloc_init() would likely run before
it, and that would mean the mutexes would be re-initialized.
- Can't have a destructor because code will run after static
destructors, and some of that code likely will invoke the allocator,
and we can't have destructed mutexes by then.
- Can't use pthread_mutex on OSX because that loops back into the
allocator.
Accommodating the use of Gecko mutexes around those constraints would
mean much more code than just implementing a new mutex class, so the
latter is preferred.
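A sketch of what such a class ends up looking like under those constraints
(simplified, non-OSX POSIX only; the real class also covers the Windows and
OSX primitives):

#include <pthread.h>

struct Mutex {
  pthread_mutex_t mMutex;

  // No constructor: a static instance is zero-initialized and malloc_init()
  // calls Init() exactly once, so static initializer order doesn't matter.
  bool Init() { return pthread_mutex_init(&mMutex, nullptr) == 0; }

  // No destructor: the mutex stays usable in code that runs after static
  // destructors and still ends up in the allocator.
  void Lock() { pthread_mutex_lock(&mMutex); }
  void Unlock() { pthread_mutex_unlock(&mMutex); }
};

static Mutex gLock;  // plain data: no static construction or destruction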
--HG--
extra : rebase_source : d2e180a5007390c620aa6d7921340b9784c7699f
The malloc_spin_* functions have ended up being strictly identical to
the malloc_mutex_* functions, so use the latter instead of the former.
--HG--
extra : rebase_source : 746bdf57cb4a33fd65335174a748cb567630e05b
Now that the radix tree structure has a fixed size, we can just allocate
the chunk radix tree object statically.
--HG--
extra : rebase_source : 6a5f022d46da1b24401b197751e594903987b7f6
All the parameters of the radix tree (bits per level, height) are
derived from the aBits argument to ::Create in a straightforward way.
aBits itself is a constant at the call point, making them all constants,
so we can turn all of them into compile-time constants instead of
storing them as data.
--HG--
extra : rebase_source : aa1be8e97ed4133d7fc106fb3ea678a759476bef
All levels except the first are using the same size, and in some cases,
even the first uses the same size. Storing only those two different
sizes allows fixing the class size, while not making the code
significantly more complex.
--HG--
extra : rebase_source : 8028c18de2fa84060c5baff7c95cd0a70e7a3c6b
The tree height was defined as:
height = aBits / bits_per_level;
if (height * bits_per_level != aBits) {
height++;
}
What's wanted here is a height that covers all the bits, where the first
level might cover less than bits_per_level.
So aBits / bits_per_level gets us the height covered by levels with
exactly bits_per_level bits. The tree height is one more when there
are remaining bits.
Put differently, we can write aBits as:
aBits = bits_per_level * x + y
with y < bits_per_level.
We have:
aBits / bits_per_level = x.
height = x when y = 0, and x + 1 when y > 0.
We're looking for a number z such that
height = (aBits + z) / bits_per_level.
Or:
height = (bits_per_level * x + y + z) / bits_per_level.
= x + (y + z) / bits_per_level.
So we're looking for a z such that
(y + z) / bits_per_level = 0 when y = 0
= 1 when y > 0
The properties of the integer division are such that the above means:
0 <= y + z < bits_per_level when y = 0
bits_per_level <= y + z < 2 * bits_per_level when y > 0
Which gives us:
0 <= z < bits_per_level
bits_per_level - y <= z < 2 * bits_per_level - y when y > 0
y being < bits_per_level per the constraint further above,
2 * bits_per_level - y > bits_per_level.
So all in all, we want a z such that
bits_per_level - y <= z < bits_per_level with 0 < y < bits_per_level
The largest value where this is true is z = bits_per_level - 1.
In summary,
height = (aBits + bits_per_level - 1) / bits_per_level
is the same as the height as originally defined.
With that formula, it's self evident that height * bits_per_level is
always >= aBits, so we remove the assertion.
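A quick constexpr check that both formulations agree (illustrative values
only):

#include <cstddef>

constexpr size_t OldHeight(size_t aBits, size_t aBitsPerLevel) {
  return aBits / aBitsPerLevel + ((aBits % aBitsPerLevel) ? 1 : 0);
}

constexpr size_t NewHeight(size_t aBits, size_t aBitsPerLevel) {
  return (aBits + aBitsPerLevel - 1) / aBitsPerLevel;
}

static_assert(OldHeight(64, 12) == NewHeight(64, 12), "same height when y > 0");
static_assert(OldHeight(36, 12) == NewHeight(36, 12), "same height when y == 0");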
--HG--
extra : rebase_source : 8ca2e5fbad7d4ad537f26508af5aa250483f1f08
bits_per_level was defined as:
ffs(pow2_ceil((kNodeSize / sizeof(void*)))) - 1
kNodeSize is (1U << 14) when SIZEOF_PTR is 4 (sizeof(void*) being the
same). Otherwise, it's CACHELINE, which is (1U << 6).
The most important part, though, is that it's always a power of 2.
And it's divided by sizeof(void*) which is always a power or 2.
The result of that division is thus always a power of 2, as long as
kNodeSize is larger than the size of a pointer, which it is.
The argument to pow2_ceil being a power of 2, pow2_ceil is a noop,
so it can go away. And the argument to ffs being a power of 2, ffs
returns one more than the n that matches 1 << n == value. So overall
the expression returns the number of shifts for
kNodeSize / SIZEOF_PTR.
Transforming kNodeSize to a number of shifts/power of 2, the expression
can then be simplified as kNodeSize2Pow - SIZEOF_PTR_2POW.
--HG--
extra : rebase_source : a22a378ba6622e2a4fbcf28811c7042cea9da24a
The only semantic change is in the value returned by Set, which now
returns whether the value could be set or not.
--HG--
extra : rebase_source : a80f5d6fdb3672715887e69215f55df0cedb231e
There is a lot of redundancy between malloc_rtree_get and
malloc_rtree_set. Essentially, they both look up a slot, and either get
a value or set a value in that slot. malloc_rtree_get doesn't create a
tree path for the slot when it doesn't exist. And the
MALLOC_RTREE_GET_GENERATE macro machinery makes malloc_rtree_get retry
with a lock and validate both results agree in debug builds.
By introducing a malloc_rtree_get_slot function that returns a slot,
optionally creating a tree path to it, we remove the redundancy between
_get and _set, and we can avoid the macro machinery as well.
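In outline, the refactoring looks like this (class and method names
approximate):

class AddressRadixTree {
 public:
  void* Get(void* aKey) {
    void** slot = GetSlot(aKey, /* aCreate */ false);
    return slot ? *slot : nullptr;
  }

  bool Set(void* aKey, void* aValue) {
    void** slot = GetSlot(aKey, /* aCreate */ true);
    if (!slot) {
      return false;  // a tree node couldn't be allocated
    }
    *slot = aValue;
    return true;
  }

 private:
  // Walks the tree levels for aKey and returns the leaf slot, or nullptr when
  // an intermediate node is missing and aCreate is false (or its allocation fails).
  void** GetSlot(void* aKey, bool aCreate);
};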
--HG--
extra : rebase_source : bbbdd33e81e8bfdc11c028f882ab877bba26f7f3
It seemingly hasn't been needed since Mac OS 10.7. A diagnostic assertion that
has been in place for a while hasn't caught any uses of it.
--HG--
extra : rebase_source : 9834849eec9174267c7df8de7fd22840ffa36d8f
Bug 571209 made many different kinds of sizes static at build time, as
opposed to configurable at run-time. While the dynamic sizes can be
useful to quickly test tweaks to e.g. quantum sizes, a
replace-malloc-built allocator could just as well do the same. This
need, however, is very rare, and doesn't justify keeping the sizes
dynamic on platforms where static sizes can't be used for the page size
because page size may vary depending on kernel options.
So we make every size that doesn't depend on the page size static,
whether MALLOC_STATIC_SIZES is enabled or not.
This makes no practical difference on tier-1 platforms, except Android
aarch64, which will benefit from more static sizes.
--HG--
extra : rebase_source : 28243a67e4fe41154c23dc39b45405479854d31d
This was done in bug 1104634 because back then the Android NDK had a
broken combination of compiler and libc, where the compiler would emit
calls to the ffs function, but the libc wouldn't provide it, and that
only when building without optimization.
Things have changed in the meanwhile, and recent NDK doesn't have this
problem. So we can remove the hack.
--HG--
extra : rebase_source : 22d6c279a60d0d23161ca1addd5b5e9a3411d8ab
Bug 1402174 made all arenas be registered in a red-black tree, which
means they are iterable through that tree, making the arenas list redundant.
The list is also inconvenient, since it needs to be constantly
reallocated, and the allocator in charge of the list doesn't know how to
free things.
Iteration of arenas is not on any hot path anyways, so even though
iterating the RB tree is slower, it doesn't matter.
So we remove the arenas list, and keep a direct pointer to the main
arena for convenience (instead of calling First() on the RB tree every
time)
--HG--
extra : rebase_source : 31f12b2de18a886eb4f8f078e11040aad3fdc800
As we're going to enable stylo on Android at some point, we'll have to
have thread local arenas there, which means Android needs to be using
thread local storage. Since Android is the last use of NO_TLS in the
allocator code base, remove it.
--HG--
extra : rebase_source : 658cbc94b4478950f683bd104b7e5da27cd08a2e
The trivial expansion of macros ended up creating cases like
expr.IsRed() ? NodeColor::Red : NodeColor::Black
which, practically speaking, is the same as
expr.Color()
so we replace those.
There are also a bunch of expr.IsRed() == false, which are replaced with
expr.IsBlack() (adding that method at the same time)
--HG--
extra : rebase_source : ab50212ff80f0c0151e7df329d8933ccd45f9781
While we're going in the opposite direction, moving away from macros,
upcoming intermediate steps are going to "manually" expand macros, but
later steps will require changing how the link field reference is done,
and having it in a single location then will be more convenient.
--HG--
extra : rebase_source : 6dde414ce392924081a41b7e3f66ae848cb14be5
That stack space would matter if recursion was involved, but there
isn't any, and a max of 1440 bytes temporarily allocated on the stack
is not really a problem.
--HG--
extra : rebase_source : 2968fafe9d604d9e6c03ac93c21d8a3a087043a4
All uses of rb_wrap have "static" as first argument to rb_wrap, move that
in the macro itself.
--HG--
extra : rebase_source : cbfe87d0539452c044b415c725cb7ce6ebb5628c
Things left for followups:
- Full cleanup of disposed arenas: bug 1364359.
- Random arena Ids: bug 1402282.
- Enforcing the arena to match on moz_arena_{realloc,free}: bug 1402283.
- Make it impossible to use arenas not created with moz_create_arena
with moz_arena_* functions: bug 1402284.
Until it's proven to require a different data structure, arena lookup by
Id is done through the use of the same RB tree used everywhere else in
the allocator.
At this stage, the implementation of the API doesn't ride the trains,
but the API can be used safely and will fall back to normal allocations
transparently for the caller.
--HG--
extra : rebase_source : aaa9bdab5b4e0c534da0c9c7a299028fc8d66dc8
Things left for followups:
- Full cleanup of disposed arenas: bug 1364359.
- Random arena Ids: bug 1402282.
- Enforcing the arena to match on moz_arena_{realloc,free}: bug 1402283.
- Make it impossible to use arenas not created with moz_create_arena
with moz_arena_* functions: bug 1402284.
Until it's proven to require a different data structure, arena lookup by
Id is done through the use of the same RB tree used everywhere else in
the allocator.
At this stage, the implementation of the API doesn't ride the trains,
but the API can be used safely and will fall back to normal allocations
transparently for the caller.
--HG--
extra : rebase_source : 089e4cbb62c239713f40763ab819c79e5cbe28ce
The implementation is not doing anything just yet. This will be done in
a followup bug.
--HG--
extra : rebase_source : e301eac77c6bd8247c09d369074ecb8d7b5a1a2f
malloc, free, calloc, realloc and memalign constitute some sort of
minimal interface to the allocator. posix_memalign, aligned_alloc and
valloc are already defined in terms of memalign. The remaining functions
are not related to active allocation.
--HG--
extra : rebase_source : ee27ca70e271f3abef76c7782724d607b52f58b1
This effectively means malloc_hook_table_t is now C++ only, which is not
a big problem.
This also makes some functions use a return construct with functions
that don't return a value (such as free). While that is not allowed in
ISO C, it's allowed in C++, so the simplification is welcome (although,
retrospectively, it turns out C compilers don't complain about it
without -pedantic).
--HG--
extra : rebase_source : defd88ca3f6d478e61a4b970393dba60fb6ca81d
This was done in bug 736564 for the xulrunner SDK, which later became
the firefox SDK, which is now gone. So we don't actually need to keep it
separate anymore (except for logalloc/replay, which still needs to link
it directly, so we keep the library definition intact so it can be
referenced; we just don't DIST_INSTALL it anymore, and always make it
linked into mozglue)
--HG--
extra : rebase_source : e4d0627ec907fe0139df5c0b2b9f7d04b43c7c78
It is one of the moving parts when adding new memory allocation APIs.
It was added in bug 1168719 and the only thing that actually used it
was the sampling-based memory profiler, which was removed in bug
1385953. We however keep the replace-malloc bridge entry point so that
something else, in the future, may still provide the feature.
--HG--
extra : rebase_source : dd4a226171429e2a4ab5666b0873e7b945f161e6
In bug 1361258, we unified the initialization sequence on mac, and
chose to make the zone registration happen after jemalloc
initialization.
The order between jemalloc init and zone registration shouldn't actually
matter, because jemalloc initializes the first time the allocator is
actually used.
On the other hand, in some build setups (e.g. with light optimization),
the initialization of the thread_arena thread local variable can happen
after the forced jemalloc initialization because of the order the
corresponding static initializers run. In some levels of optimization,
the thread_arena initializer resets the value the jemalloc
initialization has set, which subsequently makes choose_arena() return
a bogus value (or hit an assertion in ThreadLocal.h on debug builds).
So instead of initializing jemalloc from a static initializer, which
then registers the zone, we instead register the zone and let jemalloc
initialize itself when used, which increases the chances of the
thread_arena initializer running first.
--HG--
extra : rebase_source : 4d9a5340d097ac8528dc4aaaf0c05bbef40b59bb
isalloc_validate is the function behind malloc_usable_size. If for some
reason malloc_usable_size is called before mozjemalloc is initialized,
this can lead to an unexpected crash.
The chance of this actually happening is rather slim on Linux
and Windows (although still possible), and impossible on Mac, due to the
fact that the earliest point at which something can end up calling it is
after the mozjemalloc zone is registered, which happens after initialization.
... except with bug 1399921, which reorders that initialization, and
puts the zone registration first. There's then a slim chance for the
zone allocator to call into zone_size, which calls malloc_usable_size,
to determine whether a pointer allocated by some other zone belongs to
mozjemalloc's.
And it turns out that does happen, during the startup of the
plugin-container process on OSX 10.10 (but not more recent versions).
--HG--
extra : rebase_source : 331d093b03add7b2c2ce440593f5aeccaaf4dd1f
Now that this is a C++ file, and that the function names are not
mangled, we can just use the actual C++ names.
We do however need to replace MOZ_MEMORY_API, which implies extern "C",
with MFBT_API.
Also use the correct type for the size given to operator new. It
happened to work before because the generated code would just jump to
malloc without touching any register, but on aarch64, unsigned int was
the wrong type.
--HG--
extra : rebase_source : 8045f30e9c609dd7d922c77d85ac017638df6961
And since the build system doesn't handle transitions from foo.c to
foo.cpp properly without a clobber, we work around the issue by
switching to unified sources.
--HG--
rename : memory/build/mozmemory_wrap.c => memory/build/mozmemory_wrap.cpp
extra : rebase_source : 3f074b4ccab255bb0eb16841f79582060fafbc86
It happens to work because of mozglue.def, but we might as well have the
right annotations (which will also make things correct when building this
file to C++)
--HG--
extra : rebase_source : 61056dc21c9c29bab62ad5d648e94dd56dc53b14
This used to be necessary because those functions might be prefixed with
__wrap_, and linked against with -Wl,--wrap, but that's not been the case
since bug 1077366. Furthermore, mozmem_malloc_impl nowadays only does
something on Windows, and those operators are only exposed on Android.
--HG--
extra : rebase_source : ca34442bfbc5fc8be20ffcfacb9afa0f2f818b82
There is a lot of churn involved in adding new API surface to
mozjemalloc, and mozmemory.h is one. Instead of declaring
everything manually in there, "generate" the declarations through
malloc_decls.h.
--HG--
extra : rebase_source : 1416fa972319c419112c4a8b16759d90692db5b2
mozalloc_abort is not marked as a noreturn function on ARM, so clang
complains when abort calls mozalloc_abort, which calls MOZ_CRASH, which
calls abort(). We know this is OK, so just disable the warning.
The bin-unused count in memory reports indicates how much memory is
used by runs of small and sub-page allocations that is not actually
allocated. This is generally thought as an indicator of fragmentation.
While this is generally true, with the use of thread local arenas by
stylo, combined with how stylo allocates memory, it ends up also being
an indicator of wasted memory.
For instance, over the lifetime of an AWSY iteration, there are only a
few allocations that end up in the bucket for 2048 allocated bytes. In
the "worst" case, there's only one. But the run size for such
allocations is 132KiB. Which means just because we're allocating one
buffer of size between 1024 and 2048 bytes, we end up wasting 130+KiB.
Per thread.
Something similar happens with size classes of 512 and 1024, where the
run size is respectively 32KiB and 64KiB, and where there's at most a
handful of allocations of each class ever happening per thread.
Overall, an allocation log from a full AWSY iteration reveals that there
are only 448 of 860700 allocations happening on the stylo arenas that
involve sizes above (and excluding) 512 bytes, so 0.05%.
While there are improvements that can be done to mozjemalloc so that it
doesn't waste more than one page per sub-page size class, they are
changes that are too low-level to land at this time of the release
cycle. However, considering the numbers above and the fact that the
stylo arenas are only really meant to avoid lock contention during the
heavy parallel work involved, a short term, low risk, strategy is to
just delegate all sub-page (> 512, < 4096) and large (>= 4096)
allocations to the main arena. Technically speaking, only sub-page allocations are causing
this waste, but it's more consistent to just delegate everything above
512 bytes.
This should save 132KiB + 64KiB = 196KiB per stylo thread.
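In outline (names are illustrative; the real cut-off constant and thread-local
live in mozjemalloc.cpp):

#include <cstddef>

struct arena_t;                             // opaque for this sketch
extern arena_t* gMainArena;                 // the main arena
extern thread_local arena_t* gThreadArena;  // stylo's per-thread arena, may be null

// Only allocations of up to 512 bytes stay on the thread-local arena;
// everything larger is delegated to the main arena.
static inline arena_t* ChooseArena(size_t aSize) {
  if (aSize <= 512 && gThreadArena) {
    return gThreadArena;
  }
  return gMainArena;
}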
--HG--
extra : rebase_source : c7233d60305365e76aa124045b1c9492068d9415
Until bug 1361258, there was only ever one mozjemalloc arena, and the
number of dirty pages we allow to be kept dirty, fixed to 1MB per arena,
was, in fact, 1MB for an entire process.
With stylo using thread local arenas, we now can have multiple arenas
per process, multiplying that number of dirty pages.
While those dirty pages may be reused when other allocations end up
filling them later on, a relatively large number of them is kept around
for each stylo thread (in proportion to the amount of memory ever
allocated by stylo), and since the memory use from stylo depends on the
workload generated by the pages being visited, those dirty pages may
very well not be used for a rather long time. This is less of a problem
with the main arena, used for most
everything else.
So, for each arena except the main one, we decrease the number of dirty
pages we allow to be kept around to 1/8 of the current value. We do this
by introducing a per-arena configuration of that maximum number.
--HG--
extra : rebase_source : 75eebb175b3746d5ca1c371606cface50ec70f2f
Those macros are one more thing that needs to be added when the
mozjemalloc API surface is increased, but after bug 1399350, nothing
actually needs them, so remove them.
--HG--
extra : rebase_source : 2bf62cc6c179540482722a72b0d0c134d2ac2a19
The files relevant to the memory allocator are currently spread between
memory/mozjemalloc and memory/build, and the distinction was
historically from sharing some Mozilla-specific things between
mozjemalloc and jemalloc3. That distinction is not useful anymore, so
we fold everything together.
As we will likely rename the allocator at some point in the future, it
is preferable to move away from the mozjemalloc directory rather than in
its direction.
--HG--
rename : memory/mozjemalloc/Makefile.in => memory/build/Makefile.in
rename : memory/mozjemalloc/mozjemalloc.cpp => memory/build/mozjemalloc.cpp
rename : memory/mozjemalloc/mozjemalloc.h => memory/build/mozjemalloc.h
rename : memory/mozjemalloc/mozjemalloc_types.h => memory/build/mozjemalloc_types.h
rename : memory/mozjemalloc/rb.h => memory/build/rb.h
Mozjemalloc uses its own doubly linked list, which, being inherited from
C code, doesn't do much type checking, and, in practice, is rather
similar to DoublyLinkedList, so use the latter instead.
--HG--
extra : rebase_source : 9eb7334b6dde05f9af0eaea4184e532c69d0264e
clang warns about this code in mozmemory_wrap.c in the reimplementation
of vasprintf, complaining that `fmt` cannot be null:
if (str == NULL || fmt == NULL) {
^~~ ~~~~
clang is apparently exploiting knowledge about the requirements of
vasprintf here, but defensive programming on the part of our
reimplementation seems like the wiser course. Let's turn off the
warning.
Previously being a C codebase, mozjemalloc was using typedefs, avoiding
long "struct foo" uses. C++ doesn't need typedefs to achieve that, so we
can remove all that. We however keep a few typedefs in headers that are
still included from other C source files.
--HG--
extra : rebase_source : d0d807bcb76078c0ce36e4554b10803bfb36ddbb
Some system libraries call malloc_zone_free directly instead of free,
and sometimes they do that with the wrong zone. When that happens, we
circle back, trying to find the right zone, and call malloc_zone_free
with the right one, but when we can't find one, we crash, which matches
what the system free() would do. Except in one case where the pointer
we're being passed is NULL, in which case we can't trace it back to any
zone, but shouldn't crash (system free() explicitly doesn't crash in
that case).
--HG--
extra : rebase_source : 17efdcd80f1a53be7ab6b7293bfb6060a9aa4a48
Practically speaking, with code inlining, this doesn't change anything,
but will allow, in a subsequent change, to easily divert the exported
allocation functions (malloc, etc.) in order to inject replace-malloc
in the pipeline.
The added macro magic is copied from replace-malloc.c.
At the same time, reformat the functions we're touching.
--HG--
extra : rebase_source : f615909101f832f3cc471e23a3cc44a60886843f
Because malloc_decls.h is meant to be included multiple times, it
shouldn't actually declare things itself.
--HG--
extra : rebase_source : 9d6f9b2c61407265377845963a19ace2614160f4
valloc is supposed to page-align data, but mozjemalloc's definition of
pagesize is statically compiled in and might not match the actual page
size at runtime (because of MALLOC_STATIC_SIZES). We change valloc in
mozjemalloc to use the runtime page size.
--HG--
extra : rebase_source : c5b1b56e783b311ac1620a87d910e019e3f18b49
While MOZ_MEMORY_BSD is set for kFreeBSD, FreeBSD and NetBSD, the only
use of MOZ_MEMORY_BSD is along a check for __GLIBC__ which can only be
true on GNU/kFreeBSD, which doesn't have an XP_ macro, but which we
usually match with __FreeBSD_kernel__.
--HG--
extra : rebase_source : 2f143f8ed641f4e338024a58e20c62b2b30e653a
For some reason, we don't have an XP_ macro for Android, and we generally use
ANDROID or MOZ_WIDGET_ANDROID. The former is more appropriate.
--HG--
extra : rebase_source : 50576584f194aaf6b090fd530598ce9954e170b3
Back when it was added (for Windows CE, in bug 488608), mozjemalloc was
C and all the supported compilers didn't support C99 bools. Now
mozjemalloc is C++, and all the supported compilers support C99 bools
for the cases where the type is used from C.
--HG--
extra : rebase_source : b9c710a0c48dc36cb473af59e3119131d13523ce
Back when mozjemalloc was considered third-party, and before bug
1365194, mozjemalloc was calling abort() and that was redirected to
MOZ_CRASH through a moz_abort() function. Bug 1365194 changed that so
that moz_abort is called directly instead of abort, but the indirection
is actually not necessary anymore.
So we just kill moz_abort, which is unused anywhere else, and use
MOZ_CRASH directly.
Note this will (obviously) change crash signatures involving moz_abort.
--HG--
extra : rebase_source : 67698ffd8c5e52e62b9a0b7f28efb0352c8fe8ce
This patch moves measurement of ComputedValues objects from Rust to C++.
Measurement now happens (a) via DOM elements and (b) remaining elements via
the frame tree. Likewise for the style structs hanging off ComputedValues
objects.
Here is an example of the output.
> ├──27,600,448 B (26.49%) -- active/window(https://en.wikipedia.org/wiki/Barack_Obama)
> │ ├──12,772,544 B (12.26%) -- layout
> │ │ ├───4,483,744 B (04.30%) -- frames
> │ │ │ ├──1,653,552 B (01.59%) ── nsInlineFrame
> │ │ │ ├──1,415,760 B (01.36%) ── nsTextFrame
> │ │ │ ├────431,376 B (00.41%) ── nsBlockFrame
> │ │ │ ├────340,560 B (00.33%) ── nsHTMLScrollFrame
> │ │ │ ├────302,544 B (00.29%) ── nsContinuingTextFrame
> │ │ │ ├────156,408 B (00.15%) ── nsBulletFrame
> │ │ │ ├─────73,024 B (00.07%) ── nsPlaceholderFrame
> │ │ │ ├─────27,656 B (00.03%) ── sundries
> │ │ │ ├─────23,520 B (00.02%) ── nsTableCellFrame
> │ │ │ ├─────16,704 B (00.02%) ── nsImageFrame
> │ │ │ ├─────15,488 B (00.01%) ── nsTableRowFrame
> │ │ │ ├─────13,776 B (00.01%) ── nsTableColFrame
> │ │ │ └─────13,376 B (00.01%) ── nsTableFrame
> │ │ ├───3,412,192 B (03.28%) -- servo-style-structs
> │ │ │ ├──1,288,224 B (01.24%) ── Display
> │ │ │ ├────742,400 B (00.71%) ── Position
> │ │ │ ├────308,736 B (00.30%) ── Font
> │ │ │ ├────226,512 B (00.22%) ── Background
> │ │ │ ├────218,304 B (00.21%) ── TextReset
> │ │ │ ├────214,896 B (00.21%) ── Text
> │ │ │ ├────130,560 B (00.13%) ── Border
> │ │ │ ├─────81,408 B (00.08%) ── UIReset
> │ │ │ ├─────61,440 B (00.06%) ── Padding
> │ │ │ ├─────38,176 B (00.04%) ── UserInterface
> │ │ │ ├─────29,232 B (00.03%) ── Margin
> │ │ │ ├─────21,824 B (00.02%) ── sundries
> │ │ │ ├─────20,080 B (00.02%) ── Color
> │ │ │ ├─────20,080 B (00.02%) ── Column
> │ │ │ └─────10,320 B (00.01%) ── Effects
> │ │ ├───2,227,680 B (02.14%) -- computed-values
> │ │ │ ├──1,182,928 B (01.14%) ── non-dom
> │ │ │ └──1,044,752 B (01.00%) ── dom
> │ │ ├───1,500,016 B (01.44%) ── text-runs
> │ │ ├─────492,640 B (00.47%) ── line-boxes
> │ │ ├─────326,688 B (00.31%) ── frame-properties
> │ │ ├─────301,760 B (00.29%) ── pres-shell
> │ │ ├──────27,648 B (00.03%) ── pres-contexts
> │ │ └─────────176 B (00.00%) ── style-sets
The 'servo-style-structs' and 'computed-values' sub-trees are new. (Prior to
this patch, ComputedValues under DOM elements were tallied under the
'dom/element-nodes' sub-tree, and ComputedValues not under DOM elements were
ignored.) 'servo-style-structs/sundries' aggregates all the style structs that
are smaller than 8 KiB.
Other notable things done by the patch are as follows.
- It significantly changes the signatures of the methods measuring nsINode and
its subclasses, in order to handle the tallying of style structs separately
from element-nodes. Likewise for nsIFrame.
- It renames the 'layout/style-structs' sub-tree as
'layout/gecko-style-structs', to clearly distinguish it from the new
'layout/servo-style-structs' sub-tree.
- It adds some FFI functions to access various Rust-side data structures from
C++ code.
- There is a nasty hack used twice to measure Arcs, by stepping backwards from
an interior pointer to a base pointer. It works, but I want to replace it
with something better eventually. The "XXX WARNING" comments have details.
- It makes DMD print a line to the console if it sees a pointer it doesn't
recognise. This is useful for detecting when we are measuring an interior
pointer instead of a base pointer, which is bad but easy to do when Arcs are
involved.
- It removes the Rust code for measuring CVs, because it's now all done on the
C++ side.
MozReview-Commit-ID: BKebACLKtCi
--HG--
extra : rebase_source : 4d9a8c6b198a0ff025b811759a6bfa9f33a260ba
When recycling chunks, mozjemalloc tries to avoid keeping around more
than 128MB worth of chunks around, but it doesn't actually look at the
size of the chunks that are recycled, so if a chunk larger than 128MB is
recycled, it is kept as a whole, going well over the limit.
The chunks are still properly reused, and further recycling doesn't
occur, but that can limit other mmap users from getting enough address
space.
With this change, mozjemalloc now doesn't keep more than 128MB, by
splitting the chunks it recycles if they are too large.
Note this was not a problem on Windows, where chunks larger than 1MB are
never recycled (per CAN_RECYCLE).
--HG--
extra : rebase_source : 6765fd30b78ca5ddc7d55aac861355d960e47828
When recycling chunks, mozjemalloc tries to avoid keeping around more
than 128MB worth of chunks around, but it doesn't actually look at the
size of the chunks that are recycled, so if a chunk larger than 128MB is
recycled, it is kept as a whole, going well over the limit.
The chunks are still properly reused, and further recycling doesn't
occur, but that can limit other mmap users from getting enough address
space.
With this change, mozjemalloc now doesn't keep more than 128MB, by
splitting the chunks it recycles if they are too large.
Note this was not a problem on Windows, where chunks larger than 1MB are
never recycled (per CAN_RECYCLE).
--HG--
extra : rebase_source : f81e94c6960ba253037f77de1a39c9ff74651e80
The current default is 24, which is equal to the maximum number of stack frames
that DMD will record. And that's a terrible value because it splits up too many
related stack traces into separate records. There is no single best value, but
8 is a much better default.
--HG--
extra : rebase_source : c423fc4fe0e490ff6d58fa8f7116bc01c86a366e
Just one caller (in DMD) actually looks at it, and that's in an unimportant way
-- if the return value was false, mLength would be zero anyway.
--HG--
extra : rebase_source : 0463ab3765744742a9e854964342d631095fa55f
This patch does the following.
- Avoids some unnecessary casting.
- Renames the |bp| parameter as |aBp|.
- Makes the no-op FramePointerStackWalk() signature match the real one.
(Clearly it's dead code in all built configurations!)
--HG--
extra : rebase_source : 3fe606d1ff9b063294f4028ff884c20661ed9e0a
MozStackWalk() is different on Windows to the other platforms. It has two extra
arguments, which can be used to walk the stack of a different thread.
This patch makes those differences clearer. Instead of having a single function
and forbidding those two arguments on non-Windows, it removes those arguments
from MozStackWalk, and splits off MozStackWalkThread() which retains them. This
also allows those arguments to have more appropriate types (HANDLE instead of
uintptr_t; CONTEXT* instead of void*) and names (aContext instead of
aPlatformData).
The patch also removes unnecessary reinterpret_casts for the aClosure argument
at a couple of MozStackWalk() callsites.
--HG--
extra : rebase_source : 111ab7d6426d7be921facc2264f6db86c501d127
Bug 1186064 removed most of it when we started requiring VS 2015u2, but
the "frex" function exported through mozglue.def.in was only used
through the MSVCRT being patched by fixcrt.py, which is not done anymore.
So the "frex" export is not used anymore, and so the "dumb_free_thunk"
function is not used anymore as well.
--HG--
extra : rebase_source : 879c469c317c8b6749410a4a476d6c951c9a1d0f
It causes abort() on errors from munmap/VirtualFree (which in practice
don't happen except maybe in case of memory corruption), and was only
enabled on debug builds.
--HG--
extra : rebase_source : fb0c8c82dc1e2d1f14a588a82144cf82e4f4f773
Since bug 1378258 removed malloc_print_stats, there are a bunch of
allocator stats that are now unused, reducing the memory footprint of
allocator metadata.
--HG--
extra : rebase_source : 337ef3b647c20119334b6576d591006f6bb3dd16
When initializing a new chunk for use as an arena, we started by zeroing
out the chunk (if that wasn't the case) and then initializing a new
arena chunk in there. It turns out this can have a noticeable overhead,
especially when e.g. the new arena chunk is used for a large allocation
filled out by something that is realloc()ated.
OTOH, the chunk recycle code only ever keeps zeroed or arena chunks
around (there is a "recycled" type too, but in practice, at the moment,
this means they were arena chunks before). Arena chunks that were
recycled were totally emptied, so all the runs they may contain will
contain zeroed-out or poisoned data. They also contain a header, that is
overwritten by the new arena chunk initialization.
This means we can get away with reusing non-zeroed recycled chunks
without zeroing them, as long as the arena chunk header marks the runs
as madvised instead of zeroed.
Code-wise, this would benefit from getting a ChunkType out of
chunk_alloc, but this would require more refactoring than I'm willing to
do at the moment.
Before returning a chunk, chunk_recycle calls pages_commit (when
MALLOC_DECOMMIT is enabled), which is guaranteed to zero the chunk.
The code further zeroing the chunk afterwards, which is now moved out to
chunk_alloc callers, never took advantage of that fact, duplicating the
effort of zeroing the chunk on Windows.
By indicating to the callers that the chunk has already been zeroed, we
allow callers to skip zeroing on their own.
The current code only allows chunk_alloc() callers to tell whether they
want zeroed memory or not, but some might be okay either way, assuming
they act accordingly afterwards. So move the zeroing out of chunk_alloc.
Many functions in the mozjemalloc codebase like to return the opposite
boolean one would tend to expect. Pages_purge is one of them, and this
reverses the logic to match expectations.
Also make it static.
It turns out that not recycling some kinds of chunk can lead to the
recycle queue being starved in some scenarios. When that happens, we end
up mmap()ing new memory, but that turns out to be significantly slower.
So instead of not recycling huge chunks, we force-clean them, before
madvising so that the pages can still be reclaimed in case of memory
pressure.
--HG--
extra : rebase_source : 2dbd028daca92c9cd7c8079eb3dc5a0cfa06495b
This avoids MozStackWalk(), which has become unusably slow on Mac due to
changes in libunwind, and gets us back to decent speed.
The code for getting the frame pointer and stack end was copied from the Gecko
Profiler, which also uses FramePointerStackWalk() on Mac.
--HG--
extra : rebase_source : 58c32c2df8716c7c8123a4a8fb692182d066caca
Going through the system zone allocator for every call to realloc/free
on OSX is costly, because the zone allocator needs to first verify that
the allocations do belong to the allocator it invokes (which ends up
calling jemalloc's malloc_usable_size), which is unnecessary when we
expect the allocations to belong to jemalloc.
So, we export the malloc/realloc/free/etc. symbols from
libmozglue.dylib, such that libraries and programs linked against it
call directly into jemalloc instead of going through the system zone
allocator, effectively shortcutting the allocator verification.
The risk is that some things in Gecko try to realloc/free pointers it
got from system libraries, if those were allocated with a system zone
that is not jemalloc.
--HG--
extra : rebase_source : ee0b29e1275176f52e64f4648dfa7ce25d61292e
MOZ_REPLACE_MALLOC_LINKAGE was added back when there were problems with
getting weak references working properly for replace-malloc.
Versions of OSX < 10.6 needed flat namespace, but aren't supported
anymore.
Versions of Xcode < 4.5 required flat namespace + a dummy library in
order to produce proper weak references. There is virtually nobody still
building with such an ancient toolchain.
Keeping those around doesn't /really/ hurt, except recent versions of
Xcode don't expose dyldinfo in /usr/bin, used for the configure test.
Consequently, MOZ_REPLACE_MALLOC_LINKAGE ended up being set to use the
dummy library setup, which, by using flat namespace, now causes harm in
bug 1356701, causing bug 1378332.
--HG--
extra : rebase_source : e3edc1f2cf905943c33fafeb631f2f88fc87167e
At the same time:
- replace "(nullptr)" with "nullptr",
- replace "x == nullptr" with "!x",
- replace "x != nullptr" with "x".
--HG--
extra : rebase_source : 5bc5577d0de53244afae9ebadb84962f38306368
malloc_print_stats is designed as an end-of-process dump of malloc stats,
but it has limited value in Firefox itself. It has some usefulness as
something to invoke from a debugger while Firefox is running (after
manually setting opt_print_stats to true), but it also lacks information
about live allocations, which would make it more useful.
All in all, if we want some stats about jemalloc allocations in a more
useful way, we'd be better off augmenting the jemalloc_stats() API and
make the data available to e.g. about:memory, than keeping
malloc_print_stats.
--HG--
extra : rebase_source : f6fbc97dfddd52c8620e37e2e189ae0087a845b0
Going through the system zone allocator for every call to realloc/free
on OSX is costly, because the zone allocator needs to first verify that
the allocations do belong to the allocator it invokes (which ends up
calling jemalloc's malloc_usable_size), which is unnecessary when we
expect the allocations to belong to jemalloc.
So, we export the malloc/realloc/free/etc. symbols from
libmozglue.dylib, such that libraries and programs linked against it
call directly into jemalloc instead of going through the system zone
allocator, effectively shortcutting the allocator verification.
The risk is that some things in Gecko try to realloc/free pointers it
got from system libraries, if those were allocated with a system zone
that is not jemalloc.
--HG--
extra : rebase_source : 45b9b98499760a7f946878d41d2fdaadb6dff4d6
Setting MALLOC_XMALLOC enables a runtime toggle that allows making
allocations abort() on OOM instead of returning NULL. In other words,
it enables a toggle that allows turning all allocations into infallible
allocations.
The toggle however still defaults to being disabled, which means even
when MALLOC_XMALLOC is defined when building mozjemalloc, there is no
change in behavior, unless a MALLOC_OPTIONS is set.
Even if this were useful to anyone (MALLOC_XMALLOC is only defined on
debug builds, limiting the usefulness), this is something
replace-malloc, in Firefox, is meant to be used for.
So let's remove this feature, and possibly add an equivalent
replace-malloc later if deemed necessary.
--HG--
extra : rebase_source : 1ccc893e9a8e842c3fa90e91f076a528df2f7dfe
Replace-malloc libraries, such as DMD, don't really need to care about
the details of implementing all the variants of aligned memory
allocation functions. Currently, by defining MOZ_REPLACE_ONLY_MEMALIGN
before including replace_malloc.h, they get predefined functions.
Instead of making that an opt-in at build time, we make the
replace-malloc initialization just fill the replace-malloc
malloc_table_t with implementations that rely on the replace_memalign
the library provides.
--HG--
extra : rebase_source : 0842a67d9bc27a9a86c33d14d98b9c25f39982fb
Until now, the malloc implementation functions would call the
replace-malloc functions if they exist, and fall back to the real
allocator if no such function exists. Instead of doing this, we now
fill the empty slots in the malloc_table_t with the real allocator
functions.
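The idea, sketched (the struct is trimmed down to two entries; the real
malloc_table_t covers the whole allocator API):

#include <stdlib.h>

struct malloc_table_t {
  void* (*malloc)(size_t);
  void (*free)(void*);
  // ... one function pointer per allocator entry point ...
};

// After the replace-malloc library had its chance to fill the table, any slot
// it left empty gets the real allocator, so callers never test for null.
static void FillRemainingSlots(malloc_table_t& aTable) {
  if (!aTable.malloc) {
    aTable.malloc = ::malloc;  // stand-in for the real jemalloc implementation
  }
  if (!aTable.free) {
    aTable.free = ::free;
  }
}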
--HG--
extra : rebase_source : b54634f23188906939e4dc01fc5a3007de0f3f2c
We make replace_malloc_init_funcs be called on all platforms and fill out a
malloc_table_t for the replace-malloc functions with what comes from
dlsym/GetProcAddress on Android/Windows, and from the dynamically linked
weak replace_* symbols on other platforms.
replace_malloc.h contains definitions of *_impl_t types for each of the
functions in the malloc_table_t, which is redundant with the
replace_*_impl_t types we were creating, so we remove those typedefs,
except for the two functions (init and get_bridge) that don't have such
a typedef. Those functions don't appear in malloc_table_t.
--HG--
extra : rebase_source : 3705a99ee07f63dbaa66973eef19ddab224e0911
We want, in a subsequent patch, to have replace_malloc_init_funcs be
called on all platforms (including those relying on the replace-malloc
library being loaded already) and perform more initialization.
To prepare for that, we move the non-platform-specific pieces out.
--HG--
extra : rebase_source : 239ed363ee168bf4f8a96e0a1ca52981cb941b71
All the _impl functions in replace-malloc.c are largely identical. This
replaces all of them with macro expansions.
--HG--
extra : rebase_source : 67a1809b0b0fc4645ea5041154fa3a6dcb6cce6b
This makes no significant difference in practice in the macro
expansions, but will help down the line.
--HG--
extra : rebase_source : 6d61c1f28c558321478d7e5f26390d27ae8ae3ac
MOZ_REPLACE_JEMALLOC was only defined when building jemalloc4 as a
replace-malloc library.
--HG--
extra : rebase_source : fa5c402da07fa96448c170b6db99629469691efe