* Make a conditional range
This range allows a contained range to be disabled at runtime. This
allows thread-local caching to be disabled if the initial fixed-size
heap is below a threshold.
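A minimal sketch of the idea (illustrative names, not snmalloc's
actual Range API): wrap a parent range and refuse allocations once
disabled, so the layer above can fall back.

    #include <cstddef>
    #include <cstdio>

    // Sketch only: stand-in for snmalloc's range composition.
    template<typename Parent>
    struct ConditionalRange
    {
      Parent parent;
      bool enabled = true; // flipped at runtime, e.g. for small fixed heaps

      void* alloc_range(size_t size)
      {
        if (!enabled)
          return nullptr; // contained range is disabled
        return parent.alloc_range(size);
      }
    };

    struct BumpParent
    {
      char buffer[1024];
      size_t used = 0;

      void* alloc_range(size_t size)
      {
        if (used + size > sizeof(buffer))
          return nullptr;
        void* p = buffer + used;
        used += size;
        return p;
      }
    };

    int main()
    {
      ConditionalRange<BumpParent> r;
      void* a = r.alloc_range(128); // served by the parent
      r.enabled = false;            // heap below threshold: disable this layer
      void* b = r.alloc_range(128); // now refused
      printf("%p %p\n", a, b);
    }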
There was a mis-compilation in a Verona configuration that led to
two instances of key_global existing. This change moves it inside
a struct, which seems to fix the issue.
The rest of the changes limit the use of key_global: both
RemoteCache and RemoteAllocator must use the same configuration,
so there is no need to take key_global as a parameter.
The buddy allocator doesn't need to search size classes above the
largest block it currently holds. This commit tracks the highest
block stored in the buddy allocator.
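A sketch of the optimisation under simplified assumptions (one free
list per power-of-two level; names are illustrative, not snmalloc's
buddy implementation):

    #include <array>
    #include <cstddef>
    #include <cstdio>
    #include <list>

    constexpr size_t LEVELS = 32;

    struct Buddy
    {
      std::array<std::list<void*>, LEVELS> free_lists{};
      size_t highest = 0; // highest level currently holding a block

      void insert(void* p, size_t level)
      {
        free_lists[level].push_back(p);
        if (level > highest)
          highest = level;
      }

      void* remove(size_t level)
      {
        // Stop at `highest`: levels above it are known to be empty.
        for (size_t l = level; l <= highest; l++)
        {
          if (!free_lists[l].empty())
          {
            void* p = free_lists[l].front();
            free_lists[l].pop_front();
            // (a real buddy would split p back down to `level` here)
            return p;
          }
        }
        return nullptr;
      }
    };

    int main()
    {
      Buddy b;
      int block;
      b.insert(&block, 3);
      printf("%p\n", b.remove(2)); // found without scanning levels 4..31
    }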
All the checks and mitigations have been placed under feature flags.
These can be controlled by defining
SNMALLOC_CHECK_CLIENT_MITIGATIONS,
which takes a term representing the mitigations that should be enabled.
E.g.
-DSNMALLOC_CHECK_CLIENT_MITIGATIONS=nochecks+random_pagemap
The CMake build uses this to build numerous versions of the LD_PRELOAD
library and tests, allowing individual features to be benchmarked.
Co-authored-by: Nathaniel Wesley Filardo <nfilardo@microsoft.com>
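A sketch of how such a term can be an ordinary C++ constant
expression: each mitigation is a constexpr bit set and operator+
unions them. nochecks and random_pagemap are taken from the example
above; freelist_protection is a hypothetical extra flag, and none of
this is snmalloc's exact definition.

    #include <cstdint>
    #include <cstdio>

    struct MitigationSet
    {
      uint64_t bits;

      constexpr MitigationSet operator+(MitigationSet o) const
      {
        return {bits | o.bits};
      }
    };

    constexpr MitigationSet nochecks{0};
    constexpr MitigationSet random_pagemap{1ULL << 0};
    constexpr MitigationSet freelist_protection{1ULL << 1};

    // The -D flag on the compiler command line substitutes the term here.
    #ifndef SNMALLOC_CHECK_CLIENT_MITIGATIONS
    #  define SNMALLOC_CHECK_CLIENT_MITIGATIONS nochecks + random_pagemap
    #endif

    constexpr MitigationSet mitigations = SNMALLOC_CHECK_CLIENT_MITIGATIONS;

    int main()
    {
      printf("%llu\n", (unsigned long long)mitigations.bits);
    }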
When using dlopen with RTLD_DEEPBIND, the LD_PRELOAD approach used
by the majority of allocators does not work, as both libc's and
snmalloc's allocators can be called by various DSOs.
This patch uses glibc's __malloc_hook to override the allocator
as well as LD_PRELOAD. This means that all libraries will
call snmalloc when performing allocation.
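A sketch of the hook mechanism (glibc removed __malloc_hook in 2.34,
so this only applies to older glibc; my_malloc and the bump pool are
stand-ins for snmalloc's entry points):

    #include <malloc.h>
    #include <cstddef>
    #include <cstdio>

    static char pool[1 << 20];
    static size_t used = 0;

    // Stand-in for snmalloc's allocation path.
    static void* my_malloc(size_t size, const void* /*caller*/)
    {
      size = (size + 15) & ~size_t{15};
      if (used + size > sizeof(pool))
        return nullptr;
      void* p = pool + used;
      used += size;
      return p;
    }

    // Installing the hook at load time means even code bound directly
    // to libc's malloc (e.g. via dlopen with RTLD_DEEPBIND) is
    // redirected through it.
    __attribute__((constructor)) static void install()
    {
      __malloc_hook = my_malloc;
    }

    int main()
    {
      void* p = malloc(16); // dispatched through the hook
      printf("%p\n", p);
    }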
* Move Morello CI to track default release
- Log some details of the build environment
- Remove workarounds overcome by events
* Morello CI: parameterize run queue and boot env
* Morello CI to run as a non-root user
For reasons unrelated to snmalloc, it's become more convenient to engage
in a little white lie, as it were, that the CI jobs are not `root` on
the worker nodes. So I'm testing changes on the cluster orchestration
goo to run the GitHub runner as a non-root user. However, much as with
GitHub's own runners, the runner user is in the `wheel` group, and
`root` will have no password, so we can still `su` up to `root` when
needed.
Of course, when we are already root, we can `su` to anyone we like,
including `root`, so these changes are compatible with both the old and
new world order and have been tested with both.
This uses VirtualBox instead of xhyve. It might be slower, but should
be more reliable.
Tests run on FreeBSD, NetBSD, and OpenBSD. Only the FreeBSD ones are
passing at the moment; the others will keep running but aren't added as
dependencies for the action used to guard commits.
Compiler versions do not imply standard library versions, and even where
the compiler and standard library versions were matched, this check was
wrong.
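A sketch of the safer pattern, checking a library feature-test macro
rather than a compiler version (the macro chosen here is just one
illustrative example; <version> requires C++20):

    #include <version>

    // Clang paired with an old libstdc++ would pass a compiler-version
    // check yet lack the feature; the library's own macro cannot lie.
    #if defined(__cpp_lib_atomic_ref) && __cpp_lib_atomic_ref >= 201806L
    #  include <atomic>
    using counter_ref = std::atomic_ref<int>;
    #else
    // fall back to a plain atomic, or an alternative implementation
    #endif

    int main() {}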
* Implement tracking full slabs and large allocations
This adds an additional SeqSet that is used to track all the fully
used slabs and large allocations. This gives more chances to
detect memory leaks, and additionally catches some use-after-free
(UAF) failures where the object is not recycled.
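A sketch of the structure under simplified assumptions (a plain
intrusive doubly-linked list with a sentinel; not snmalloc's actual
SeqSet):

    #include <cstdio>

    struct Node
    {
      Node* next = this;
      Node* prev = this;
    };

    struct SeqSet
    {
      Node head; // sentinel

      void insert(Node* n)
      {
        n->next = head.next;
        n->prev = &head;
        head.next->prev = n;
        head.next = n;
      }

      void remove(Node* n)
      {
        n->prev->next = n->next;
        n->next->prev = n->prev;
      }

      template<typename F>
      void iterate(F f)
      {
        for (Node* n = head.next; n != &head; n = n->next)
          f(n);
      }
    };

    int main()
    {
      SeqSet full_slabs;
      Node a, b;
      full_slabs.insert(&a); // slab became fully used
      full_slabs.insert(&b);
      full_slabs.remove(&a); // a free returned it to partial state
      // At teardown, anything still here is reported as a leak.
      full_slabs.iterate(
        [](Node* n) { printf("leaked slab meta at %p\n", (void*)n); });
    }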
* Make slabmeta track a slab interior pointer
Use the head of the free-list builder to track an interior pointer to
the slab. The head is only meaningful while the list contains
something, so while the list is empty we can repurpose it to hold an
interior pointer to the slab and report more accurate leaks.
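A sketch of the trick with illustrative names (not snmalloc's actual
free-list builder):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    struct FreeListBuilder
    {
      // head is the first free object when length > 0; otherwise it
      // is dead storage, repurposed to remember an address inside the
      // slab for leak reporting.
      uintptr_t head = 0;
      size_t length = 0;

      void set_slab_interior(uintptr_t p)
      {
        if (length == 0)
          head = p;
      }

      uintptr_t slab_interior() const
      {
        return (length == 0) ? head : 0;
      }
    };

    int main()
    {
      FreeListBuilder b;
      b.set_slab_interior(0x1000);
      printf("%#lx\n", (unsigned long)b.slab_interior());
    }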
* clangformat
* clangtidy
* clangtidy
* Clang tidy again.
* Fixing provenance.
* Clangformat
* Clang tidy.
* Add assert for sanity
* Make reinterpret_cast more descriptive.
Add an operation to get a tag-free pointer from an address_t, and use
it.
* Clangformat
* CR
* Fix calculation of number of allocations.
* Fix calculation of number of allocations.
* Fix test
As with CTest, but without the full machinery thereof. This allows
package builders to use the usual build targets (all, install) without
needing to build the test programs if they're just going to get dropped
on the floor.
This will serve as the granularity with which we store authority
pointers in the (forthcoming) authmap, so 4K is almost surely too small.
16M is, admittedly, chosen out of a hat.
To date, we've had exactly one kind of Pagemap and it held exactly one
type of thing, a descendant of class MetaEntryBase.
PagemapRegisterRange tacitly assumed that the Pagemap (adapter) it
interacted with would therefore store entries that could have
.set_boundary() called on them. But in general there's no requirement
that this be true; Pagemaps are generic data structures.
To enable reuse of the PagemapRegisterRange machinery more generally,
change the type of Pagemap::register_range() to take a pointer (rather
than an address) and move the MetaEntryBase-specific functionality to
the backend_helpers/pagemap adapter.
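A sketch of the layering: a generic pagemap knows nothing about its
entry type's methods, and an adapter with a MetaEntry-like entry adds
the boundary marking. GenericPagemap, BackendPagemap, and MetaEntry
here are simplified stand-ins, not snmalloc's classes.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <map>

    template<typename Entry>
    struct GenericPagemap
    {
      std::map<uintptr_t, Entry> entries; // stand-in for the flat array

      // Takes a pointer now, so generic users need not strip
      // provenance into an address early.
      void register_range(void* base, size_t size)
      {
        constexpr size_t GRANULE = 4096;
        auto a = reinterpret_cast<uintptr_t>(base);
        for (uintptr_t p = a; p < a + size; p += GRANULE)
          entries[p] = Entry{};
      }

      Entry& get(void* p)
      {
        return entries[reinterpret_cast<uintptr_t>(p) & ~uintptr_t{4095}];
      }
    };

    struct MetaEntry
    {
      bool boundary = false;
      void set_boundary() { boundary = true; }
    };

    // Adapter: the MetaEntry-specific behaviour lives here, not in
    // the generic structure.
    struct BackendPagemap : GenericPagemap<MetaEntry>
    {
      void register_range(void* base, size_t size)
      {
        GenericPagemap<MetaEntry>::register_range(base, size);
        get(base).set_boundary();
      }
    };

    int main()
    {
      BackendPagemap pm;
      alignas(4096) static char arena[8192];
      pm.register_range(arena, sizeof(arena));
      printf("%d\n", pm.get(arena).boundary);
    }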
Instead, take a template parameter for the no-args init() method, so
that randomization can be disabled on StrictProvenance architectures
(CHERI), where we don't expect it to be useful, even when snmalloc is
being built to be otherwise paranoid.
Catch callsites up.
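A sketch of the knob with illustrative names; snmalloc's actual
template plumbing differs:

    #include <cstddef>
    #include <cstdio>
    #include <random>

    template<bool RandomizePagemap>
    struct Pagemap
    {
      size_t offset = 0;

      void init()
      {
        if constexpr (RandomizePagemap)
        {
          std::random_device rd;
          offset = rd() % 4096; // randomly displace the map
        }
        // else: deterministic layout, as expected on CHERI
      }
    };

    int main()
    {
      Pagemap<true> randomized;
      randomized.init();
      Pagemap<false> cheri; // StrictProvenance: randomization disabled
      cheri.init();
      printf("%zu %zu\n", randomized.offset, cheri.offset);
    }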
More directly ensure that a "basic" pagemap's type matches its
"concrete" pagemap parameter's entry type. Absent this check, getting
this wrong won't be detected until even further along in template code
generation (when considering a method that sees the mismatch).
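A sketch of such an early check with illustrative names: a
static_assert at the point of composition surfaces an entry-type
mismatch immediately, instead of deep inside a later instantiation.

    #include <type_traits>

    template<typename Entry>
    struct ConcretePagemap
    {
      using EntryType = Entry;
    };

    template<typename Entry, typename Concrete>
    struct BasicPagemap
    {
      static_assert(
        std::is_same_v<Entry, typename Concrete::EntryType>,
        "BasicPagemap entry type must match its concrete pagemap");
    };

    struct MetaEntry {};

    // Compiles:
    BasicPagemap<MetaEntry, ConcretePagemap<MetaEntry>> ok;
    // Would fail immediately with a clear message:
    // BasicPagemap<int, ConcretePagemap<MetaEntry>> bad;

    int main() {}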