This way, we don't have to specify a Parent when we're just interested in
Pipe-ing things together.
We could have called these inner classes Apply and left the Pipe implementation
alone, but it's probably better to call them Type and adjust the Pipe code.
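A minimal sketch of the resulting shape (names and details here are illustrative, not snmalloc's actual definitions): each range exposes an inner Type<Parent> template, and Pipe folds the layers together so no Parent needs to be spelled out.

    #include <cstddef>

    // The base of the stack: just provides memory (here, a stub).
    struct BaseRange
    {
      void* alloc_range(std::size_t) { return nullptr; }
    };

    // Each layer exposes an inner Type<Parent> template, as described above.
    template<std::size_t Extra>
    struct PadRange
    {
      template<typename Parent>
      struct Type : Parent
      {
        void* alloc_range(std::size_t size)
        {
          // Delegate to the parent layer with an adjusted request.
          return Parent::alloc_range(size + Extra);
        }
      };
    };

    // Pipe folds the layers left to right, so no Parent is named explicitly.
    template<typename Base, typename... Layers>
    struct PipeImpl
    {
      using result = Base;
    };

    template<typename Base, typename Layer, typename... Rest>
    struct PipeImpl<Base, Layer, Rest...>
    {
      using result =
        typename PipeImpl<typename Layer::template Type<Base>, Rest...>::result;
    };

    template<typename Base, typename... Layers>
    using Pipe = typename PipeImpl<Base, Layers...>::result;

    // Usage: compose a stack of ranges without nesting template arguments.
    using ObjectRange = Pipe<BaseRange, PadRange<16>>;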
* Sanity check on parameters to large buddy.
* Check commit occurs at page granularity
* Alter PAGE_SIZE usage
Using PAGE_SIZE as the minimum size of the CHUNK means that if PAGE_SIZE is
configured to 2MiB, then there is a gap between MAX_SMALL_SIZECLASS_SIZE and
MIN_CHUNK_SIZE, and thus we can't represent certain sizes.
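As a rough illustration of the gap (the constant values here are made up for the example, not snmalloc's real configuration):

    #include <cstddef>

    // Hypothetical configuration with a 2MiB page size.
    constexpr std::size_t PAGE_SIZE = 2 * 1024 * 1024;
    constexpr std::size_t MIN_CHUNK_SIZE = PAGE_SIZE;           // old scheme: chunk floor tied to the page size
    constexpr std::size_t MAX_SMALL_SIZECLASS_SIZE = 16 * 1024; // illustrative ceiling of the small sizeclasses

    // Any size in between is too big for a small sizeclass and too small for
    // a chunk, so it cannot be represented under the old scheme.
    constexpr bool gap_exists = MAX_SMALL_SIZECLASS_SIZE < MIN_CHUNK_SIZE;
    static_assert(gap_exists, "with a 2MiB PAGE_SIZE the old scheme leaves a gap");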
* Rename to use Config, rather than StateHandle/Globals/Backend
* Make Backend a type on Config that contains the address space management implementation
* Make Ranges part of the Backend configuration, so we can reuse code for different ways of managing memory
* Pull the common chains of range definitions into separate files for reuse.
* Move PagemapEntry to CommonConfig
* Expose Pagemap through backend, so frontend doesn't see Pagemap directly
* Remove global Pal and use DefaultPal where one is not passed explicitly.
Co-authored-by: David Chisnall <davidchisnall@users.noreply.github.com>
Co-authored-by: Nathaniel Filardo <105816689+nwf-msr@users.noreply.github.com>
This exposes a feature on Ranges to access ranges higher up the
stack of ranges. This could be useful for applying operations in the
middle of a pipeline like
object_range.ancestor<SpecialRange>().init(...);
This allows some initialisation to be added to the middle of the pipeline
without breaking the current coding pattern.
It also allows for bypassing some ranges
object_range.ancestor<LargeObjectsRange>().alloc_chunk(...);
Neither are done in this commit, but both will occur in future commits.
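A rough sketch of the idea behind ancestor (the type names and the parent() hook are assumptions for illustration, not the real interface): walk up the stack of ranges until the requested type is found.

    #include <type_traits>

    struct PalRange
    {
      void init() {}
    };

    struct LargeObjectsRange
    {
      PalRange below;
      PalRange& parent() { return below; }
    };

    struct ObjectRange
    {
      LargeObjectsRange below;
      LargeObjectsRange& parent() { return below; }
    };

    // Recurse towards the base of the stack until the target type is reached.
    template<typename Target, typename Range>
    Target& ancestor(Range& r)
    {
      if constexpr (std::is_same_v<Target, Range>)
        return r;
      else
        return ancestor<Target>(r.parent());
    }

    // Usage, mirroring the examples above:
    //   ObjectRange object_range;
    //   ancestor<PalRange>(object_range).init();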
Co-authored-by: Nathaniel Wesley Filardo <nfilardo@microsoft.com>
This commit changes the codegen for error messages for failed memcpys.
This no longer generates a stack frame and correctly tail calls the
error message generator.
It also turns the error messages on in Release builds. This will lead
to a better adoption experience.
The ranges are naturally put together with pipes. This
commit does some template magic to make the code more
readable. There should be no functional changes from this commit.
Some secure allocators check that the C++-supplied size is correct
relative to the meta-data. This adds such a check to the secure version of
snmalloc.
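A minimal sketch of the kind of check intended (names and shape are assumptions, not snmalloc's actual code): compare the size the client claims against the size the allocator recorded.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    struct AllocMeta
    {
      std::size_t stored_size; // size recorded by the allocator's meta-data
    };

    void check_client_size(const AllocMeta& meta, std::size_t supplied_size)
    {
      // A size larger than the recorded allocation indicates an incorrect
      // sized deallocation from the client.
      if (supplied_size > meta.stored_size)
      {
        std::fprintf(stderr, "Sized deallocation does not match allocation\n");
        std::abort();
      }
    }

    int main()
    {
      AllocMeta meta{64};
      check_client_size(meta, 64);    // ok
      // check_client_size(meta, 128); // would abort: claimed size too large
    }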
Currently a failing debug_check_empty does not provide any information.
This change allows it to print the size of one of the allocations
that has not been freed.
If this test fails to allocate memory, that should not cause the test to
fail. The 'abort' was added previously to confirm an infrequent failure
was caused by out-of-memory causing the test to assign to nullptr.
This was confirmed in a CI run, and now the test can be made to ignore
allocation failure.
The Metadata range should not be shared with the object range. This
change ensures that there are separate requests to the Pal for meta-data
and object data ranges. The requests are never combined, and thus
memory cannot flow from being used in malloc to later be used in meta-
data.
Build three levels of checking:
- None
- Checks memcpy only
- Checks (full)
Currently you can build checks without enabling the memcpy protection.
This PR fixes that.
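One way to picture the relationship between the levels (the macro names here are assumptions for illustration, not the actual build flags): full checks should imply the memcpy checks, which is what this PR restores.

    // Level 0: no checks.
    // Level 1: memcpy checks only.
    // Level 2: full checks, which must also enable the memcpy checks.
    #if defined(EXAMPLE_CHECKS_FULL) && !defined(EXAMPLE_CHECKS_MEMCPY)
    #  define EXAMPLE_CHECKS_MEMCPY
    #endif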
* Adding a refilling heuristic
The large buddy allocator requests memory from its parent range. The
request size was a fixed, large amount; this was sufficiently large that
contention was not a problem.
This change makes the initial request smaller and grows it gradually, so
that contention is still not a problem, but small workloads request less
memory (see the sketch below).
* Remove special case for OE as no longer required.
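A sketch of the refilling heuristic (the constants and names are illustrative, not the actual values): start with a small request and grow each refill geometrically up to the old fixed request size.

    #include <algorithm>
    #include <cstddef>

    class RefillSize
    {
      static constexpr std::size_t initial = 1 << 20; // small first request (illustrative)
      static constexpr std::size_t maximum = 1 << 26; // cap at the old fixed request size (illustrative)
      std::size_t next = initial;

    public:
      // Size of the next request to the parent range.
      std::size_t take()
      {
        std::size_t request = next;
        next = std::min(next * 2, maximum); // grow geometrically until the cap
        return request;
      }
    };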
This refactoring was provided by David. Previously if a backend
provided a capptr_domesticate function with the wrong type it would be
silently ignored. This change requires backends to explicitly opt in
to domestication via a new Backend::Option and ensures the compiler
will loudly complain if there is a mismatch.
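A rough sketch of the shape of this (names assumed, details simplified): the backend publishes an option, and the front end only calls capptr_domesticate when the option is set, so a mismatch becomes a compile error rather than a silently ignored function.

    struct Options
    {
      bool HasDomesticate = false;
    };

    struct MyBackend
    {
      static constexpr Options Opts{true}; // explicit opt-in to domestication

      static void* capptr_domesticate(void* p)
      {
        return p; // a real backend would validate the pointer here
      }
    };

    template<typename Backend>
    void* domesticate(void* p)
    {
      if constexpr (Backend::Opts.HasDomesticate)
        // If the backend opts in but provides the wrong signature, this call
        // fails to compile rather than being silently ignored.
        return Backend::capptr_domesticate(p);
      else
        return p;
    }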
Clang 15 doesn't build the release builds with CHECK_CLIENT enabled
because they are using `SNMALLOC_ASSERT`, and so the values that we're
collecting to check are never actually checked. This is probably a bug:
if we're turning on the checks, I imagine it's because we want them.
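Roughly the distinction at play (simplified sketch; the real macros have more to them): an assert-style macro disappears in release builds, whereas the client checks need a macro that survives them.

    #include <cassert>
    #include <cstdlib>

    // Disappears when NDEBUG is defined, i.e. in release builds.
    #define EXAMPLE_ASSERT(expr) assert(expr)

    // Stays in release builds, which is what the client checks need.
    #define EXAMPLE_CHECK(expr) \
      do \
      { \
        if (!(expr)) \
          std::abort(); \
      } while (0)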
During creation of a large allocation the code was not setting the
consolidation bit. This meant that Windows would crash for certain patterns
of large allocations.
* Move PageMap interface into pagemap.h and rename to BasicPagemap.
Refactoring suggested by David. This allows custom backends to reuse
or extend the BasicPagemap. It has template parameters for the PAL,
concrete page map and page map entry types as well as the Backend (so
that it can be friends). BackendAllocator provides an example page map
entry type.
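Roughly the shape described (all names and details here are assumptions for illustration):

    #include <cstdint>

    // A reusable pagemap templated on the PAL, the concrete map, the entry
    // type and the Backend, which is befriended so it can reach the internals.
    template<typename PAL, typename ConcreteMap, typename Entry, typename Backend>
    class BasicPagemap
    {
      friend Backend;

      static inline ConcreteMap map{};

    public:
      static Entry get(std::uintptr_t address)
      {
        return map.lookup(address);
      }

      static void set(std::uintptr_t address, Entry e)
      {
        map.store(address, e);
      }
    };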
This doesn't give any extra flexibility: the range itself can be either
a stateless class, a class with no per-instance state that stores everything
in static fields, or a class with stateful instances. It did add a
requirement that every range implementation add an indirection layer.
See src/snmalloc/README.md for an explanation of the layers.
Some other cleanups on the way:
Fine-grained stats support is now gone.
It's been broken for two years: it depends on iostream (which then
causes linker failures with libstdc++) and it collects the wrong
stats for the new design. After discussion with @mjp41, it's better to
remove it and introduce new stats support later, rather than keep broken
code in the main branch.
Tracing was controlled with a preprocessor macro; now there's also a
CMake option.
MetaCommon is now gone. The back end must provide a SlabMetadata,
which must be a subtype of MetaSlab (i.e. MetaSlab or a subclass of
MetaSlab). It may add additional state here.
The MetaEntry is now templated on the concrete subclass of MetaSlab that
the back-end uses. The MetaEntry still stores this as a `uintptr_t` to
allow easier toggling of the boundary bit but the interfaces are all in
terms of stable types now.
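A simplified sketch of that storage scheme (field and bit names assumed, not the real definitions): the entry keeps the metadata pointer as a uintptr_t so the boundary bit can be toggled cheaply, while the accessors are typed on the back end's SlabMetadata.

    #include <cstdint>
    #include <type_traits>

    struct MetaSlab
    {
      // Front-end slab metadata lives here.
    };

    template<typename SlabMetadata>
    class ExampleMetaEntry
    {
      static_assert(std::is_base_of_v<MetaSlab, SlabMetadata>,
                    "SlabMetadata must be MetaSlab or a subclass of MetaSlab");

      static constexpr std::uintptr_t BOUNDARY_BIT = 1;
      std::uintptr_t meta{0};

    public:
      void set(SlabMetadata* m, bool boundary)
      {
        meta = reinterpret_cast<std::uintptr_t>(m) | (boundary ? BOUNDARY_BIT : 0);
      }

      SlabMetadata* get_slab_metadata() const
      {
        return reinterpret_cast<SlabMetadata*>(meta & ~BOUNDARY_BIT);
      }

      bool is_boundary() const
      {
        return (meta & BOUNDARY_BIT) != 0;
      }
    };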
Also some tidying of names (SharedStateHandle is now called Backend).
In a follow-on PR, we can then remove the chunk field from the
BackendMetadata in the non-CHERI back end and allow back ends that don't
require extra state to use MetaSlab directly.
Other cleanups:
- Remove backend/metatypes, define the types that the front end expects
in mem/metaslab. The back end may extend them but these types define
part of the contract between the front and back ends.
- Remove FrontendMetaEntry and fold its methods into MetaEntry.
- For example purposes, the default back end now extends MetaEntry.
This also ensures that nothing in the front end depends on the
specific type of MetaEntry.
- Some things now have more sensible names.
The meta entry now operates in one of three modes:
- When owned by the front end, it stores a pointer to a remote, a
pointer to some MetaSlab subclass, and a sizeclass.
- When owned by the back end, it stores two back-end defined values
that must fit in the bits of `uintptr_t` that are not reserved for
the MetaEntry itself.
- When not owned by either, it can be queried as if owned by the front
end.
The red-black tree has been refactored to allow the holder to be a
wrapper type, removing all of the Holder* and Holder& uses and treating
it uniformly as a value type that can be used to access the contents.
The chunk field is gone from the slab metadata.
This will need to be added back in the CHERI back ends, but it's a
back-end policy. The back end can choose to use it or not, depending on
whether it can safely convert between an Alloc-bounded pointer and a
Chunk-bounded pointer.
The term 'metaslab' originated in snmalloc 1 to mean a slab of slabs.
In the snmalloc2 branch it was repurposed to mean metadata about a
slab. To make this clearer, all uses of metaslab are now gone and have
been renamed to slab metadata. The frontend metadata classes are all
prefixed Frontend and some extra invariants are checked with
`static_assert`.
On FreeBSD (possibly elsewhere) the normal versions of these go via an
indirection layer because they are pthread cancellation points. This
indirection layer does not work correctly if pthreads can't allocate
memory and so we can't get debug output until malloc is working, at
least a little bit.
With this version, we can call the __sys_ variants, which skip any libc
/ libthr interposition.
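A sketch of the idea (the declaration is written out here as an assumption for illustration):

    #include <cstddef>
    #include <unistd.h>

    #if defined(__FreeBSD__)
    // Call the raw system call entry point directly, bypassing the libc/libthr
    // cancellation-point wrappers.
    extern "C" ssize_t __sys_write(int fd, const void* buf, size_t len);
    #endif

    inline void debug_write(const char* msg, size_t len)
    {
    #if defined(__FreeBSD__)
      __sys_write(2, msg, len);
    #else
      write(2, msg, len);
    #endif
    }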
CheriBSD 00d71bd4d11af448871d196f987c2ded474f3039 changes
"CHERI_PERM_CHERIABI_VMMAP" to be spelled "CHERI_PERM_SW_VMEM" and deprecates
the old form. Follow along, with a fallback so we can still use older CheriBSDs.
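Something along these lines (the local macro name is made up for the example):

    // Prefer the new spelling when available; fall back to the deprecated one
    // on older CheriBSDs.
    #ifdef CHERI_PERM_SW_VMEM
    #  define EXAMPLE_PERM_VMEM CHERI_PERM_SW_VMEM
    #else
    #  define EXAMPLE_PERM_VMEM CHERI_PERM_CHERIABI_VMMAP
    #endif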
David points out that the downcasts I had introduced were UB. Instead, go back
to passing MetaEntry-s around and make MetaslabMetaEntry just a namespace of
static methods.
This partially reverts 7940fee00c
Otherwise these won't get updated until the small buddy allocator hands them off
to the large buddy allocator (when they morph into being rbtree nodes) and so
the frontend might get confused in the interim (including risk of UAF on
double-free).
For the 32-bit external pointer case it was performing a divide by the size,
and for things not managed by snmalloc this was causing a crash. This checks
for zero, and gives the start of the address range as the start of the
object.
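Roughly the guard added (names here are illustrative):

    #include <cstddef>
    #include <cstdint>

    // When the looked-up object size is zero, the address is not managed by
    // snmalloc, so skip the division and report the start of the address range.
    std::uintptr_t object_start(std::uintptr_t addr, std::size_t object_size,
                                std::uintptr_t range_start)
    {
      if (object_size == 0)
        return range_start;

      return range_start + ((addr - range_start) / object_size) * object_size;
    }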