Commit Graph

2195 Commits

Author SHA1 Message Date
Nobuyoshi Nakada a05b261464 Always use the longer version of `TRY_WITH_GC` 2022-09-28 23:51:38 +09:00
Aaron Patterson 06abfa5be6
Revert this until we can figure out WB issues or remove shapes from GC
Revert "* expand tabs. [ci skip]"

This reverts commit 830b5b5c35.

Revert "This commit implements the Object Shapes technique in CRuby."

This reverts commit 9ddfd2ca00.
2022-09-26 16:10:11 -07:00
git 830b5b5c35 * expand tabs. [ci skip]
Tabs were expanded because the file did not have any tab indentation in unedited lines.
Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
2022-09-27 01:21:58 +09:00
Jemma Issroff 9ddfd2ca00 This commit implements the Object Shapes technique in CRuby.
Object Shapes is used for accessing instance variables and representing the
"frozenness" of objects.  Object instances have a "shape" and the shape
represents some attributes of the object (currently which instance variables are
set and the "frozenness").  Shapes form a tree data structure, and when a new
instance variable is set on an object, that object "transitions" to a new shape
in the shape tree.  Each shape has an ID that is used for caching. The shape
structure is independent of class, so objects of different types can have the
same shape.

For example:

```ruby
class Foo
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

class Bar
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

foo = Foo.new # `foo` has shape id 2
bar = Bar.new # `bar` has shape id 2
```

Both `foo` and `bar` instances have the same shape because they both set
instance variables of the same name in the same order.

This technique can help to improve inline cache hits as well as generate more
efficient machine code in JIT compilers.

This commit also adds some methods for debugging shapes on objects.  See
`RubyVM::Shape` for more details.

For more context on Object Shapes, see [Feature #18776]

Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
2022-09-26 09:21:30 -07:00
Samuel Williams 22af2e9084 Rework vm_core to use `int first_lineno` struct member. 2022-09-26 00:41:16 +13:00
Nobuyoshi Nakada ff07e5c264
Skip poisoned regions
Poisoned regions cannot be accessed outside gc.c without unpoisoning them first.
Specifically, debug.gem is terminated by AddressSanitizer.

```
SUMMARY: AddressSanitizer: use-after-poison iseq_collector.c:39 in iseq_i
```
2022-08-09 20:11:48 +09:00
Peter Zhu 229cf263df Lock the VM for rb_gc_writebarrier_unprotect
When using Ractors, rb_gc_writebarrier_unprotect requires a VM lock
since it modifies the bitmaps.
2022-07-28 10:02:12 -04:00
Peter Zhu 1c16645216 Make array slices views rather than copies
Before this commit, if the slice fits in VWA, it would make a copy
rather than a view. This is slower as it requires a memcpy of the
contents.
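
A rough Ruby-level sketch of the effect (the semantics were already
copy-on-write, so only the eager memcpy goes away; the sizes here are
illustrative, not from the commit):

```ruby
# Sketch: slicing a long array yields a view sharing the parent's buffer.
# Observable behaviour is unchanged because a write still forces the copy.
a = Array.new(1_000) { |i| i }
s = a[0, 500]    # a shared view onto a's contents, not an eager copy
s[0] = :mutated  # copy-on-write kicks in here; the parent is untouched
p a[0]           # => 0
```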
2022-07-28 10:02:12 -04:00
Peter Zhu 2375afb8d6 Refactor gc_ref_update_array 2022-07-28 10:02:12 -04:00
Nobuyoshi Nakada 5d5c1d0fbd
Suppress use-after-free warning by gcc-12 2022-07-28 09:06:42 +09:00
Nobuyoshi Nakada f42230ff22
Adjust styles [ci skip] 2022-07-27 18:42:27 +09:00
git 3b1ed03d8c * expand tabs. [ci skip]
Tabs were expanded because the file did not have any tab indentation in unedited lines.
Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
2022-07-27 01:40:03 +09:00
Jemma Issroff 36d0c71ace
Refactored poisoning and unpoisoning freelist to simpler API 2022-07-26 09:39:31 -07:00
Peter Zhu efb91ff19b Rename rb_ary_tmp_new to rb_ary_hidden_new
rb_ary_tmp_new suggests that the array is temporary in some way, but
that's not true: it just creates an array that's hidden and not on the
transient heap. This commit renames it to rb_ary_hidden_new.
2022-07-26 09:12:09 -04:00
Nobuyoshi Nakada b30b727c24
Fix format specifier
`uintptr_t` is not always `unsigned long`, but it can be safely cast to a
void pointer.
2022-07-25 09:18:36 +09:00
Takashi Kokubun 5b21e94beb Expand tabs [ci skip]
[Misc #18891]
2022-07-21 09:42:04 -07:00
Peter Zhu cdbb9b8555 [Bug #18929] Fix heap creation thrashing in GC
Before this commit, if we don't have enough slots after sweeping but
had pages on the tomb heap, then the GC would frequently allocate and
deallocate pages. This is because after sweeping it would set
allocatable pages (since there were not enough slots) but free the
pages on the tomb heap.

This commit reuses pages on the tomb heap if there's not enough slots
after sweeping.
2022-07-21 10:46:32 -04:00
Peter Zhu 1c9acb6bb1 Refactor macros of array.c
Move some macros in array.c to internal/array.h so that other files
can also access these macros.
2022-07-21 09:02:45 -04:00
Daniel Colson 32e406d6d3 Ensure _id2ref finds symbols with the correct type
Prior to this commit it was possible to call `ObjectSpace._id2ref` with
an offset static symbol object_id and get back a new, incorrectly tagged
symbol:

```
> sensible_sym = ObjectSpace._id2ref(:a.object_id)
=> :a
> nonsense_sym = ObjectSpace._id2ref(:a.object_id + 40)
=> :a
> sensible_sym == nonsense_sym
=> false
```

`nonsense_sym` ends up tagged with `RUBY_ID_INSTANCE` instead of
`RB_ID_LOCAL`. That means we can do silly things like:

```
> foo = Object.new
> foo.instance_variable_set(:a, 123)
(irb):2:in `instance_variable_set': `a' is not allowed as an instance variable name (NameError)
> foo.instance_variable_set(ObjectSpace._id2ref(:a.object_id + 40), 123)
=> 123
> foo.instance_variables
=> [:a]
```

This was happening because `get_id_entry` ignores the tag bits when
looking up the symbol. So `rb_id2str(symid)` would return a value and
then we'd continue on with the nonsense `symid`.

This commit prevents the situation by checking that the `symid` actually
matches what we get back from `get_id_entry`. Now we get a `RangeError`
for the nonsense id:

```
> ObjectSpace._id2ref(:a.object_id)
=> :a
> ObjectSpace._id2ref(:a.object_id + 40)
(irb):1:in `_id2ref': 0x000000000013f408 is not symbol id value (RangeError)
```

Co-authored-by: John Hawthorn <jhawthorn@github.com>
2022-07-20 10:38:44 -07:00
Peter Zhu 86d061294d [Bug #18928] Fix crash in WeakMap
In wmap_live_p, if is_pointer_to_heap returns false, then the page is
either in the tomb or has already been freed, so the object is dead. In
this case, wmap_live_p should return false.
2022-07-20 08:40:31 -04:00
Nobuyoshi Nakada 472740de41
Fix free objects count condition
Free objects have `T_NONE` as the builtin type.  A pointer to a valid
array element will never be `NULL`.
2022-07-20 17:39:54 +09:00
Peter Zhu 7424ea184f Implement Objects on VWA
This commit implements Objects on Variable Width Allocation. This allows
Objects with more ivars to be embedded (i.e. contents directly follow the
object header) which improves performance through better cache locality.
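
A hedged way to see this from Ruby (exact sizes depend on the build; this
probe is an illustration, not part of the commit):

```ruby
# Sketch: with VWA, an object with several ivars can live in a larger slot
# and stay embedded instead of spilling its ivars into a separately
# malloc'd buffer. The reported size is build-dependent and illustrative.
require "objspace"

class Point
  def initialize
    @x = 1.0
    @y = 2.0
    @z = 3.0
  end
end

p ObjectSpace.memsize_of(Point.new) # one (larger) slot, no external ivar buffer
```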
2022-07-15 09:21:07 -04:00
Matt Valentine-House 214ed4cbc6 [Feature #18901] Support size pool movement for Arrays
This commit enables Arrays to move between size pools during compaction.
This can occur if the array is mutated such that it would fit in a
different size pool when embedded.

The move is carried out in two stages:

1. The RVALUE is moved to a destination heap during object movement
   phase of compaction
2. The array data is re-embedded and the original buffer freed if
   required. This happens during the update references step
2022-07-12 08:50:33 -04:00
Matt Valentine-House a6dd859aff Add expand_heap option to GC.verify_compaction_references
In order to reliably test compaction we need to be able to move objects
between size pools.

In order for this to happen there must be pages in a size pool into
which we can allocate.

The existing implementation of `double_heap` only doubled the existing
number of pages in the heap, so if a size pool had a low number of pages
(or 0) it's not guaranteed that enough space will be created to move
objects into that size pool.

This commit deprecates the `double_heap` option and replaces it with
`expand_heap` instead.

`expand_heap` will expand each heap by enough pages to hold a number of
slots defined by `GC_HEAP_INIT_SLOTS` or by `heap->total_pages`, whichever
is larger.

If both `double_heap` and `expand_heap` are present, a deprecation
warning will be shown for `double_heap` and the `expand_heap` behaviour
will take precedence.

Given that this is an API intended for debugging and testing GC
compaction I'm not concerned about the extra memory usage or time taken
to create the pages. However, for completeness:

Running the following `test.rb` and using `time` on my Macbook Pro shows
the following memory usage and time impact:

pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"
GC.verify_compaction_references(double_heap: true, toward: :empty)
pp "RSS (kb): #{`ps -o rss #{Process.pid}`.lines.last.to_i}"

```
❯ time make run
./miniruby -I./lib -I. -I.ext/common  -r./arm64-darwin21-fake  ./test.rb
"RSS (kb): 24000"
<internal:gc>:251: warning: double_heap is deprecated and will be removed
"RSS (kb): 25232"

________________________________________________________
Executed in  124.37 millis    fish           external
   usr time   82.22 millis    0.09 millis   82.12 millis
   sys time   28.76 millis    2.61 millis   26.15 millis

❯ time make run
./miniruby -I./lib -I. -I.ext/common  -r./arm64-darwin21-fake  ./test.rb
"RSS (kb): 24000"
"RSS (kb): 49040"

________________________________________________________
Executed in  150.13 millis    fish           external
   usr time  103.32 millis    0.10 millis  103.22 millis
   sys time   35.73 millis    2.59 millis   33.14 millis
```
2022-07-11 09:00:03 -04:00
Nobuyoshi Nakada ec09ba58d1
Extract `atomic_inc_wraparound` function 2022-07-10 17:56:36 +09:00
Nobuyoshi Nakada b1b8172328
Add `asan_unpoisoning_object` to execute the block with unpoisoning 2022-07-10 13:11:07 +09:00
Nobuyoshi Nakada ec303e49af
Split `rb_raw_obj_info` 2022-07-10 13:11:07 +09:00
Nobuyoshi Nakada 233054a609
Cycle `obj_info_buffers_index` atomically 2022-07-10 13:11:06 +09:00
Nobuyoshi Nakada a006dcb73f
`APPEND_S` for no conversion formats 2022-07-10 13:07:40 +09:00
Nobuyoshi Nakada 2bf0313561
Rewrite `APPENDF` using variadic arguments 2022-07-10 13:03:22 +09:00
Nobuyoshi Nakada 51025a9013
Use `size_t` for `rb_raw_obj_info` 2022-07-10 13:03:22 +09:00
Nobuyoshi Nakada fbe3651466
Use `asan_unpoison_object_temporary` 2022-07-10 13:03:22 +09:00
Nobuyoshi Nakada b16f44ad4f
Get rid of static buffer in `obj_info` 2022-07-10 13:03:21 +09:00
Nobuyoshi Nakada 61c7ae4d27 Gather heap page size conditions combination
When similar combinations of conditions are separated in two places, it
is harder to make sure the conditional blocks match each other.
2022-07-07 22:39:59 +09:00
Peter Zhu f36859869f Improve error message for segv in read_barrier_handler
If the page_body is a null pointer, then read_barrier_handler will
crash with an unrelated message. This commit improves the error message.

Before:

test.rb:1: [BUG] Couldn't unprotect page 0x0000000000000000, errno: Cannot allocate memory

After:

test.rb:1: [BUG] read_barrier_handler: segmentation fault at 0x14
2022-07-07 09:39:28 -04:00
Peter Zhu d6c98626da Fix crash in compaction due to unlocked page
The page of src could be partially compacted, so it may contain
T_MOVED. Sweeping a page may read objects on this page, so we
need to lock the page.
2022-07-07 09:39:28 -04:00
Peter Zhu d7c5a6d49b Fix typo in gc_compact_move
The page we're sweeping is on the destination heap `dheap`, not the
source heap `heap`.
2022-07-07 09:39:28 -04:00
Nobuyoshi Nakada f681f9ae24
Adjust indents [ci skip] 2022-07-06 00:45:06 +09:00
Nobuyoshi Nakada cab10a2c50
Extract `protect_page_body` to fix mismatched braces 2022-06-18 10:20:46 +09:00
KJ Tsanaktsidis 05ffc037ad Disable Mach exception handlers when read barriers in place
The GC compaction mechanism implements a kind of read barrier by marking
some (OS) pages as unreadable, and installing a SIGBUS/SIGSEGV handler
to detect when they're accessed and invalidate an attempt to move the
object.

Unfortunately, when a debugger is attached to the Ruby interpreter on
Mac OS, the debugger will trap the EXC_BAD_ACCESS mach exception before
the runtime can transform that into a SIGBUS signal and dispatch it.
Thus, execution gets stuck; any attempt to continue from the debugger
re-executes the line that caused the exception and no forward progress
can be made.

This makes it impossible to debug either the Ruby interpreter or a C
extension whilst compaction is in use.

To fix this, we disable the EXC_BAD_ACCESS handler when installing the
SIGBUS/SIGSEGV handlers, and re-enable it once the compaction is done.
The debugger will still trap on the attempt to read the bad page, but it
will be trapping the SIGBUS signal, rather than the EXC_BAD_ACCESS mach
exception. It's possible to continue from this in the debugger, which
invokes the signal handler and allows forward progress to be made.
2022-06-18 00:10:16 +09:00
Nobuyoshi Nakada 2c19086323
Suppress code unused unless GC_CAN_COMPILE_COMPACTION 2022-06-17 10:47:16 +09:00
Peter Zhu 79eaaf2d0b Include runtime checks for compaction support
Commit 0c36ba5319 changed GC compaction
methods to not be implemented when not supported. However, that commit
only does compile time checks (which currently only checks for WASM),
but there are additional compaction support checks during run time.

This commit changes it so that GC compaction methods aren't defined
during run time if the platform does not support GC compaction.

[Bug #18829]
2022-06-16 10:18:46 -04:00
Peter Zhu 52d42e7023 Rename GC_COMPACTION_SUPPORTED
Naming this macro GC_COMPACTION_SUPPORTED is misleading because it
only checks whether compaction is supported at compile time.

[Bug #18829]
2022-06-16 10:18:46 -04:00
Takashi Kokubun 1162523bae
Remove MJIT worker thread (#6006)
[Misc #18830]
2022-06-15 09:40:54 -07:00
Matt Valentine-House 56cc3e99b6 Move String RVALUES between pools
And re-embed any strings that can now fit inside the slot they've been
moved to
2022-06-13 10:11:27 -07:00
Peter Zhu 8d57336360 Fix major GC thrashing
Only growth heaps are allowed to start major GCs. Before this patch,
growth heaps are defined as size pools that freed more slots than had
empty slots (i.e. there were more dead objects than empty space).

But if the size pool is relatively stable and tightly packed with mostly
old objects and has allocatable pages, then it would be incorrectly
classified as a growth heap and trigger major GC. But since it's stable,
it would not use any of the allocatable pages and forever be classified
as a growth heap, causing major GC thrashing. This commit changes the
definition of growth heap to require that the size pool to have no
allocatable pages.
2022-06-08 12:09:19 -04:00
Peter Zhu d1b6c8a1cc Fix compilation error when USE_RVARGC=0
force_major_gc_count was not defined when USE_RVARGC=0.
2022-06-08 11:25:31 -04:00
Peter Zhu fafe68185c Add key force_major_gc_count to GC.stat_heap
force_major_gc_count is the number of times the size pool forced major
GC to run.
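
For example, the new key can be read per size pool (a sketch; the
surrounding keys of `GC.stat_heap` vary by version):

```ruby
# Sketch: GC.stat_heap returns one stats hash per size pool; the new
# force_major_gc_count key counts the major GCs that pool forced.
GC.stat_heap.each do |pool, stats|
  puts "size pool #{pool}: force_major_gc_count=#{stats[:force_major_gc_count]}"
end
```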
2022-06-08 10:03:00 -04:00
Peter Zhu c4bf24ee46 Remove while loop over heap_prepare
Having a while loop over `heap_prepare` makes the GC logic difficult to
understand (it is difficult to understand when and why `heap_prepare`
yields a free page). It is also a source of bugs and can cause an infinite
loop if `heap_page` never yields a free page.
2022-06-07 09:56:20 -04:00
Nobuyoshi Nakada af90433876
Typedef built-in function types 2022-06-02 16:05:35 +09:00
Nobuyoshi Nakada b96a3a6fd2
Move `GC.verify_compaction_references` [Bug #18779]
Define `GC.verify_compaction_references` as a built-in ruby method,
according to GC compaction support via `GC::OPTS`.
2022-06-02 15:32:00 +09:00
Nobuyoshi Nakada dfc8060756
Adjust indent and nesting [ci skip] 2022-06-02 14:34:48 +09:00
Mike Dalessio 0c36ba5319 Define unsupported GC compaction methods as rb_f_notimplement
Fixes [Bug #18779]

Define the following methods as `rb_f_notimplement` on unsupported
platforms:

- GC.compact
- GC.auto_compact
- GC.auto_compact=
- GC.latest_compact_info
- GC.verify_compaction_references

This change allows users to call `GC.respond_to?(:compact)` to
properly test for compaction support. Previously, it was necessary to
invoke `GC.compact` or `GC.verify_compaction_references` and check if
those methods raised `NotImplementedError` to determine if compaction
was supported.

This follows the precedent set for other platform-specific
methods. For example, in `process.c` for methods such as
`Process.fork`, `Process.setpgid`, and `Process.getpriority`.
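
The detection pattern this enables might look like the following sketch:

```ruby
# Sketch: on unsupported platforms GC.compact is rb_f_notimplement, so
# respond_to? returns false and no NotImplementedError probing is needed.
if GC.respond_to?(:compact)
  GC.compact
else
  warn "GC compaction is not supported on this platform"
end
```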
2022-05-24 09:40:03 -07:00
Mike Dalessio 0de1495f35 Move compaction-related methods into gc.c
These methods are removed from gc.rb and added to gc.c:

- GC.compact
- GC.auto_compact
- GC.auto_compact=
- GC.latest_compact_info
- GC.verify_compaction_references

This is a prefactor to allow setting these methods to
`rb_f_notimplement` in a followup commit.
2022-05-24 09:40:03 -07:00
Matt Valentine-House 708e839dee Fix compiler warning when USE_RVARGC=0 2022-05-13 16:26:41 -04:00
Kaíque Kandy Koga a85cdb5a6e
Write have instead of have have [ci skip] 2022-05-10 13:07:16 +09:00
Peter Zhu 85479b34f7 Don't allocate new page on finish sweeping
We don't need to allocate a new page in gc_sweep_finish_size_pool.
It can be allocated when needed.
2022-05-09 08:45:24 -04:00
Peter Zhu e28e9c63c6 Fix heap_extend_pages when total_slots is 0
Some size pools may not have any pages/slots, so total_slots is 0. This
causes a divide-by-zero in the calculation. This commit adds a special
case to catch the case when total_slots is 0 and returns the number of
pages for heap_init_slots.
2022-05-09 08:45:24 -04:00
Peter Zhu f7d480378a Grow size pools with no or few slots
If the size pool has no or few pages/slots, then min_free_slots will
be a very small number (or even 0). Then the heap won't be eligible to
grow, causing GC thrashing or infinite loops.
2022-05-09 08:45:24 -04:00
Peter Zhu b3f3cb0c38 Call gc_sweep_finish_size_pool on size pools with no pages
Size pools with no pages won't be swept, so gc_sweep_finish_size_pool
will never be called on them, but gc_sweep_finish_size_pool must be called
to grow the size pool.
2022-05-09 08:45:24 -04:00
Peter Zhu 033e58cf2c Fix gc_page_sweep when last bitmap plane is not used
Depending on alignment, the last bitmap plane may not be used. Then it will
appear as if all of the objects on that plane are unmarked, which will
cause a buffer overrun when we try to free the object. This commit
changes the loop to calculate the number of planes used
(bitmap_plane_count).
2022-05-09 08:45:24 -04:00
Alan Wu cae85c528c Mark RCLASS_INCLUDER
Since 4d8f76286b, we need to dereference
the includer field on iclasses, so we need to mark it to make sure
it's alive.

Sometimes during compaction we crash because the field is dangling,
though I have a hard time constructing such a situation. See
http://ci.rvm.jp/results/trunk@ruby-iga/3947725
2022-05-05 17:37:07 -04:00
Jemma Issroff d7df8c6964 Unpoison freelist when iterating over it in gc_sweep_page 2022-05-04 12:49:15 -07:00
Peter Zhu bff31b3208 Remove unneeded cast
`start` is of type uintptr_t so it does not need to be cast to VALUE.
2022-05-04 09:24:03 -04:00
Alan Wu 379f5a6e8e Update reference for RCLASS_INCLUDER during compaction
We didn't update the includer field during compaction so it could become
a dangling pointer after compaction. It's only recently that we started
to dereference the field, and we were only comparing the pointer before
then, so the omission only recently started to cause crashes.

By instrumenting object.c:833 with `rp(includer);`, you can see the
includer field become `T_NONE` with the following script:

```ruby
mod = Module.new do
  protected def foo = 1
end

klass = Class.new do
  include Module.new
  def run
    foo
  end
end

klass.include(mod)

GC.verify_compaction_references(double_heap: true, toward: :empty)

klass.new.run
```

I found a crash in a private application that this patch fixes, but
wasn't able to develop a small reproducer. Hence the above demo that
requires instrumentation.
2022-05-03 16:48:46 -04:00
Nobuyoshi Nakada df1594e4b5
Parenthesize macro arguments 2022-04-13 22:55:20 +09:00
Kazuhiro NISHIYAMA 48ffa28044
Fix a typo [ci skip] 2022-04-12 19:14:39 +09:00
S-H-GAMELINKS 5b467400d2 [DOC] Some link prefix replacements 2022-04-09 17:43:46 +09:00
Nobuyoshi Nakada 5af507f527
Update `heap_pages_deferred_final` atomically 2022-04-07 12:19:18 +09:00
Eric Wong a19b2d59fc ruby_gc_set_params: update malloc_limit when env is set
During VM startup, rb_objspace_alloc sets malloc_limit
(objspace->malloc_params.limit) before ruby_gc_set_params is called, thus
nullifying the effect of RUBY_GC_MALLOC_LIMIT before the initial GC run.

The call sequence is as follows:

  main.c::main()
    ruby_init
      ruby_setup
        Init_BareVM
          rb_objspace_alloc // malloc_limit = gc_params.malloc_limit_min;
    ruby_options
      ruby_process_options
        process_options
          ruby_gc_set_params // RUBY_GC_MALLOC_LIMIT => gc_params.malloc_limit_min

With ruby_gc_set_params setting malloc_limit, RUBY_GC_MALLOC_LIMIT
affects the process sooner.

[ruby-core:107170]
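
A minimal way to observe the fix (the stat key name is an assumption
based on current CRuby, not part of this commit):

```ruby
# Sketch: run as e.g. `RUBY_GC_MALLOC_LIMIT=67108864 ruby check.rb`.
# After this change the env var is reflected before the initial GC run.
p GC.stat(:malloc_increase_bytes_limit)
```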
2022-04-04 21:46:02 +00:00
Peter Zhu ea9c09a92c Disable mmap on WASM
WASM does not have proper support for mmap.
2022-04-04 09:27:14 -04:00
Peter Zhu c482ee4025 Make heap page sizes 64KiB by default
Commit dde164e968 decoupled incremental
marking from page sizes. This commit changes Ruby heap page sizes to
64KiB. Doing so will have several benefits:

1. We can use compaction on systems with 64KiB system page sizes (e.g.
   PowerPC).
2. Larger page sizes will allow Variable Width Allocation to increase
   slot sizes and embed larger objects.
3. Since commit 002fa28599, macOS has 64
   KiB pages. Making page sizes 64 KiB will bring these systems to
   parity.

I have attached some benchmark results below.

Discourse:
    On Discourse, we saw much better p99 performance (e.g. for "categories"
    it went from 214ms on master to 134ms on branch, for "home" it went
    from 265ms to 251ms). We don’t see much change in p60, p75, and p90
    performance. We also see a slight decrease in memory usage by 1.04x.

    Branch RSS: 354.9MB
    Master RSS: 368.2MB

railsbench:
    On rails bench, we don’t see a big change in RPS or p99
    performance. We don’t see a big difference in memory usage.

    Branch RPS: 826.27
    Master RPS: 824.85

    Branch p99: 1.67
    Master p99: 1.72

    Branch RSS: 88.72MB
    Master RSS: 88.48MB

liquid:
    We don’t see a significant change in liquid performance.

    Branch parse & render: 28.653 i/s
    Master parse & render: 28.563 i/s
2022-04-04 09:27:14 -04:00
Matt Valentine-House 651b832c1b extract magic number from gc_sweep_step 2022-04-01 10:52:18 -04:00
Peter Zhu fe21b7794a Use mmap for heap page allocation only
Currently, rb_aligned_malloc uses mmap if Ruby heap pages can be
allocated through mmap (when system heap page size <= Ruby heap page
size). If the Ruby heap page size is increased to 64KiB, then mmap will
be used on systems with 64KiB system page sizes. However, the transient
heap also uses rb_aligned_malloc and requires 32KiB alignment. This
would break in the current implementation since it would allocate sizes
through mmap that are not multiples of the system page size.

This commit adds heap_page_body_allocate which will use mmap when
possible and changes rb_aligned_malloc to not use mmap (and only
use posix_memalign).
2022-04-01 10:27:18 -04:00
Matt Valentine-House d8352ff3ac [Feature #18619] remove FL_FROM_FREELIST 2022-04-01 08:45:52 -04:00
Matt Valentine-House c26a85fc96 [Feature #18619] Remove redundant compaction path 2022-04-01 08:45:52 -04:00
Matt Valentine-House 76572e5a7f [Feature #18619] Reverse the order of compaction movement
This commit changes the way compaction moves objects and sweeps pages in
order to better facilitate object movement between size pools.

Previously we would move the scan cursor first until we found an empty
slot and then we'd decrement the compact cursor until we found something
to move into that slot. We would sweep the page that contained the scan
cursor before trying to fill it

In this algorithm we first move the compact cursor down until we find an
object to move. We then take a free page from the desired destination
heap (always the same heap in this current iteration of the code).

If there is no free page we sweep the page at the sweeping_page cursor,
add it to the free pages, advance the cursor to the next page, and
try again.

We sweep one page from each size pool in this way, and then repeat that
process until all the size pools are compacted (all the cursors have
met), and then we update references and sweep the rest of the heap.
2022-04-01 08:45:52 -04:00
Matt Valentine-House bb037f6d86 Remove hard-coded swept slots threshold 2022-03-31 14:39:59 -04:00
Peter Zhu dde164e968 Decouple incremental marking step from page sizes
Currently, the number of incremental marking steps is calculated based
on the number of pooled pages available. This means that if we make Ruby
heap pages larger, it would run fewer incremental marking steps (which
would mean each incremental marking step takes longer).

This commit changes incremental marking to run after every
INCREMENTAL_MARK_STEP_ALLOCATIONS allocations. This means that
the behaviour of incremental marking remains the same regardless of the
Ruby heap page size.

I've benchmarked against discourse benchmarks and did not get a
significant change in response times beyond the margin of error. This is
expected as this new incremental marking algorithm behaves very
similarly to the previous one.
2022-03-30 09:33:17 -04:00
Nobuyoshi Nakada 42a0bed351
Prefix ccan headers (#4568)
* Prefixed ccan headers

* Remove unprefixed names in ccan/build_assert

* Remove unprefixed names in ccan/check_type

* Remove unprefixed names in ccan/container_of

* Remove unprefixed names in ccan/list

Co-authored-by: Samuel Williams <samuel.williams@oriontransfer.co.nz>
2022-03-30 20:36:31 +13:00
Peter Zhu ae650f0372 Remove unneeded function declarations in gc.c 2022-03-28 10:02:45 -04:00
Peter Zhu 5f10bd634f Add ISEQ_BODY macro
Use ISEQ_BODY macro to get the rb_iseq_constant_body of the ISeq. Using
this macro will make it easier for us to change the allocation strategy
of rb_iseq_constant_body when using Variable Width Allocation.
2022-03-24 10:03:51 -04:00
John Hawthorn 19f331f588 Dedup superclass array in leaf sibling classes
Previously, we would build a new `superclasses` array for each class,
even though for all immediate subclasses of a class, the array is
identical.

This avoids duplicating the arrays on leaf classes (those without
subclasses) by calculating and storing a "superclasses including self"
array on a class when it's first inherited and sharing that among all
subclasses.

An additional trick used is that the "superclass array including self"
is valid as "self"'s superclass array. It just has its own class at the
end. We can use this to avoid an extra pointer of storage and can use
one bit of a flag to track that we've "upgraded" the array.
2022-03-03 11:23:27 -08:00
John Hawthorn b13a7c8e36 Constant time class to class ancestor lookup
Previously when checking ancestors, we would walk all the way up the
ancestry chain checking each parent for a matching class or module.

I believe this was especially unfriendly to CPU cache since for each
step we need to check two cache lines (the class and class ext).

This check is used quite often in:
* case statements
* rescue statements
* Calling protected methods
* Class#is_a?
* Module#===
* Module#<=>

I believe it's most common to check a class against a parent class, so
this commit aims to improve that (unfortunately it does not help when
checking for an included Module).

This is done by storing on each class the number and an array of all
parent classes, in order (BasicObject is at index 0). Using this we can
check whether a class is a subclass of another in constant time since we
know the location to expect it in the hierarchy.
2022-02-23 19:57:42 -08:00
Peter Zhu 71afa8164d Change darray size to size_t and add functions that use GC malloc
Changes size and capacity of darray to size_t to support more
elements.

Adds functions to darray that use GC allocation functions.
2022-02-16 09:50:29 -05:00
Koichi Sasada 1ae630db26 `wmap#each` should check liveness of keys
`ObjectSpace::WeakMap#each*` should check each key's liveness.
Fixes [Bug #18586]
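
A minimal sketch of the fixed behaviour (GC timing makes this
nondeterministic, so treat it as an illustration):

```ruby
# Sketch: once the key object is collected, #each must no longer yield it.
wm = ObjectSpace::WeakMap.new
key = Object.new
wm[key] = "payload"
key = nil
GC.start
wm.each { |k, v| p [k, v] } # dead keys are skipped after this fix
```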
2022-02-16 13:31:46 +09:00
Koichi Sasada 76e594d515 fix GC event synchronization
(1) gc_verify_internal_consistency() uses barrier locking
    for consistency while `during_gc == true` at the end
    of the sweep when `RGENGC_CHECK_MODE >= 2`.
(2) `rb_objspace_reachable_objects_from()` is called without
    VM synchronization and it checks `during_gc != true`.

So (1) and (2) together cause a BUG because of `during_gc == true`.
To prevent this error, wait for the VM barrier while `during_gc == false`,
and introduce VM locking in `rb_objspace_reachable_objects_from()`.

http://ci.rvm.jp/results/trunk-asserts@phosphorus-docker/3830088
2022-02-14 17:17:55 +09:00
Peter Zhu 2617532499 Free cached mark stack chunks when freeing objspace
Cached mark stack chunks should also be freed when freeing objspace.
2022-02-10 09:33:42 -05:00
Peter Zhu af321ea727 Move total_freed_pages to size pool 2022-02-03 15:06:55 -05:00
Peter Zhu a9221406aa Move total_allocated_pages to size pool 2022-02-03 15:06:55 -05:00
Peter Zhu 424374d330 Fix case when gc_marks_continue does not yield slots
gc_marks_continue will start sweeping when it finishes marking. However,
if the heap we are trying to allocate into is full, then the sweeping
may not yield any free slots. If we don't call gc_sweep_continue
immediate after this, then another GC will be started halfway during
lazy sweeping. gc_sweep_continue will either grow the heap or finish
sweeping.
2022-02-03 09:22:24 -05:00
Peter Zhu 7b77d46671 Decouple GC slot sizes from RVALUE
Add a new macro BASE_SLOT_SIZE that determines the slot size.

For Variable Width Allocation (compiled with USE_RVARGC=1), all slot
sizes are powers-of-2 multiples of BASE_SLOT_SIZE.

For USE_RVARGC=0, BASE_SLOT_SIZE is set to sizeof(RVALUE).
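
Worked example of the resulting ladder (the 40-byte `RVALUE` and the five
pools are assumptions for a typical 64-bit build):

```ruby
# Sketch: with USE_RVARGC=1, slot sizes are power-of-two multiples of
# BASE_SLOT_SIZE; on a 64-bit build where sizeof(RVALUE) == 40 that gives:
base_slot_size = 40 # illustrative; the real constant lives in gc.c
sizes = (0..4).map { |i| base_slot_size << i }
p sizes # => [40, 80, 160, 320, 640]
```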
2022-02-02 09:52:04 -05:00
Peter Zhu 605f226142 Fix heap page iteration in gc_verify_heap_page
The for loops are not correctly iterating heap pages in
gc_verify_heap_page.
2022-01-31 09:42:20 -05:00
Nobuyoshi Nakada 67f4729ff0
[Bug #18556] Fallback `MAP_ANONYMOUS`
Define `MAP_ANONYMOUS` to `MAP_ANON` if undefined on old systems.
2022-01-29 19:07:38 +09:00
Peter Zhu e714163011 Fix typo in assertion in gc.c 2022-01-26 09:45:22 -05:00
Nobuyoshi Nakada 16e7585557
Unpoison the cached object in the exact size 2022-01-26 14:34:25 +09:00
Peter Zhu 82f0580aa4 Call rb_id_table_foreach_values instead
These places never replace the value, so call rb_id_table_foreach_values
instead of rb_id_table_foreach_values_with_replace.
2022-01-25 16:51:16 -05:00
Peter Zhu 4d9ad91a35 Rename rb_id_table_foreach_with_replace
Renames rb_id_table_foreach_with_replace to
rb_id_table_foreach_values_with_replace and passes only the value to the
callback. We can use this in GC compaction when we cannot access the
global symbol array.
2022-01-25 16:51:16 -05:00
Peter Zhu b07879e553 Remove redundant if statement in try_move
The if statement is redundant since if `index == 0` then
`BITS_BITLENGTH * index == 0`.
2022-01-25 09:38:17 -05:00
Peter Zhu 87784fdeb2 Keep right operand within width when right shifting
NUM_IN_PAGE could return a value much larger than 64. According to the
C11 spec 6.5.7 paragraph 3 this is undefined behavior:

> If the value of the right operand is negative or is greater than or
> equal to the width of the promoted left operand, the behavior is
> undefined.

On most platforms, this is usually not a problem as the architecture
will mask off all out-of-range bits.
2022-01-24 14:34:12 -05:00