It seems like `gc_verify_compaction_references` is not protected against
incorrect page alignment. This commit pushes the alignment check down to
`gc_start_internal` so that anyone trying to compact will check page
alignment first.
I think this method may be getting called on PowerPC, where the alignment
might be wrong:
http://rubyci.s3.amazonaws.com/ppc64le/ruby-master/log/20211021T190006Z.fail.html.gz
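As a rough illustration of the kind of guard involved, here is a minimal,
self-contained sketch of a page-alignment check; the `PAGE_ALIGN` constant
and `page_is_aligned` helper are hypothetical stand-ins, not Ruby's actual
`HEAP_PAGE_ALIGN` machinery.
```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_ALIGN ((uintptr_t)1 << 14)   /* assumed 16KB page alignment */
#define PAGE_ALIGN_MASK (PAGE_ALIGN - 1)

/* A page body is correctly aligned when its low bits are all zero. */
static int page_is_aligned(const void *page_body)
{
    return ((uintptr_t)page_body & PAGE_ALIGN_MASK) == 0;
}

int main(void)
{
    char probe;
    printf("stack byte aligned to 16KB? %d\n", page_is_aligned(&probe));
    return 0;
}
```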
I'm looking through the places where YJIT needs notifications. It looks
like these changes to gc.c and vm_callinfo.h have become unnecessary
since 84ab77ba59. This commit just makes the diff against upstream
smaller, but otherwise shouldn't change any behavior.
Added UJIT_CHECK_MODE. Set it to 1 to double-check method dispatch in
generated code.
It's surprising to me that we need to watch both cc and cme. There might
be opportunities to simplify there.
Runtime assertion for an argument declared as non-null.
This macro does nothing when `RBIMPL_ATTR_NONNULL` is effective;
otherwise it asserts that the argument is non-null.
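A minimal sketch of the pattern, assuming a hypothetical `NONNULL_ARG`
macro and `HAVE_ATTR_NONNULL` guard; the real macro name and configuration
check in Ruby's headers differ.
```c
#include <assert.h>
#include <string.h>

#if defined(HAVE_ATTR_NONNULL)            /* assumed: compiler attribute is active */
# define NONNULL_ARG(arg) ((void)0)       /* attribute already enforces non-null   */
#else
# define NONNULL_ARG(arg) assert((arg) != NULL)
#endif

static size_t safe_strlen(const char *s)
{
    NONNULL_ARG(s);   /* fail fast in debug builds instead of crashing in strlen */
    return strlen(s);
}

int main(void)
{
    return safe_strlen("hello") == 5 ? 0 : 1;
}
```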
Commit 123eeb1c1a added incremental GC,
which moved the resetting of malloc_increase and oldmalloc_increase to
before marking. However, during_minor_gc is not set until gc_marks_start,
so the value read there is from the previous GC run rather than the
current one.
Before the incremental GC commit, this code was in gc_before_sweep,
which ran after marking (before sweeping), so the value of
during_minor_gc was correct.
When the object is moved back into the T_MOVED slot, the flags of the
T_MOVED are not copied, so the FL_FROM_FREELIST flag is lost. This causes
total_freed_objects to always be incremented.
When invalidating a page during compaction, the free_slots count should
be updated for the page of the object and not the page of the forwarding
address (since the object gets moved back to the forwarding address).
Force-recycled objects could create a freelist for the page. At the
start of sweeping we should append to that freelist, rather than replace
it, to avoid permanently losing the slots already on it.
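A minimal sketch of why appending matters, using hypothetical
`free_slot`/`page` structures rather than Ruby's real heap types: pushing
onto the existing list head keeps pre-sweep entries reachable, while
assigning a fresh head would orphan them.
```c
#include <stddef.h>
#include <stdio.h>

struct free_slot { struct free_slot *next; };

struct page {
    struct free_slot *freelist;   /* may already hold force-recycled slots */
};

/* Push a newly swept slot onto the head of the page's freelist.
 * Because this links into the existing list instead of overwriting it,
 * slots recycled before the sweep are not lost. */
static void freelist_push(struct page *p, struct free_slot *slot)
{
    slot->next = p->freelist;
    p->freelist = slot;
}

int main(void)
{
    struct page pg = { NULL };
    struct free_slot a, b;
    freelist_push(&pg, &a);   /* force-recycled before the sweep */
    freelist_push(&pg, &b);   /* found during the sweep          */
    printf("both slots kept: %d\n", pg.freelist == &b && b.next == &a);
    return 0;
}
```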
This commit implements size classes in the GC for the Variable Width
Allocation feature. Unless the `USE_RVARGC` compile flag is set, only a
single size class is created, maintaining current behaviour. See the
redmine ticket for more details.
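As a rough sketch of how a size class might be chosen for an allocation,
assuming hypothetical 40-byte base slots that double per class (the actual
pool sizes and count in the VWA implementation may differ):
```c
#include <stddef.h>
#include <stdio.h>

#define BASE_SLOT_SIZE 40   /* assumed smallest slot size     */
#define SIZE_POOL_COUNT 5   /* assumed number of size classes */

/* Return the index of the smallest size class that fits `size`,
 * or -1 if the request is too large for any slot. */
static int size_pool_idx_for(size_t size)
{
    size_t slot = BASE_SLOT_SIZE;
    for (int i = 0; i < SIZE_POOL_COUNT; i++, slot *= 2) {
        if (size <= slot) return i;
    }
    return -1;   /* would fall back to separately malloc'd memory */
}

int main(void)
{
    printf("%d %d %d\n",
           size_pool_idx_for(24),     /* fits the 40-byte class: 0  */
           size_pool_idx_for(100),    /* fits the 160-byte class: 2 */
           size_pool_idx_for(4096));  /* no class fits: -1          */
    return 0;
}
```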
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
This commit removes T_PAYLOAD since the new VWA implementation no longer
requires T_PAYLOAD types.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
When a `T_DATA` object is created whose class has not undefined or
redefined the alloc function, the alloc function now gets undefined by
Data_Wrap_Struct et al. Optionally, a future release may also warn that
this is being done.
This should help developers of C extensions to meet the requirements
explained in "doc/extension.rdoc". Without a check like this, there is
no easy way for an author of a C extension to see where they have made
a mistake.
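A minimal sketch of what the requirement looks like from the extension
side; the `Thing` class and its helpers are hypothetical, but
`Data_Wrap_Struct` and `rb_undef_alloc_func` are the real C API calls
involved.
```c
#include <ruby.h>
#include <stdlib.h>

static void thing_free(void *p) { free(p); }

/* Construct instances only through this function, which always wraps
 * a valid pointer, never through the default allocator. */
static VALUE thing_new(VALUE klass)
{
    void *p = malloc(16);
    return Data_Wrap_Struct(klass, NULL, thing_free, p);
}

void Init_thing(void)
{
    VALUE cThing = rb_define_class("Thing", rb_cObject);
    /* The requirement from doc/extension.rdoc: undef the alloc func so
     * Thing.allocate cannot create an object wrapping a NULL pointer. */
    rb_undef_alloc_func(cThing);
    rb_define_singleton_method(cThing, "new", thing_new, 0);
}
```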
Commit c32218de1b turned the during_compacting
flag into 2 bits to support the case where there is no write barrier. But
commit 32b7dcfb56 changed compaction to
always enable the write barrier. This commit cleans up some of the
leftover code.
This fixes cases where exceptions raised using Thread#raise are
swallowed by finalizers and not delivered to the running thread.
This could cause issues with finalizers that rely on pending interrupts,
but that case is expected to be rarer.
Fixes [Bug #13876]
Fixes [Bug #15507]
Co-authored-by: Koichi Sasada <ko1@atdot.net>
This commit adds an assertion after `gc_page_sweep` to verify that the
freelist length is equal to the number of free slots in the page.
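In spirit, the invariant looks like the following self-contained sketch;
the `page` and `free_slot` structures are hypothetical stand-ins for
Ruby's heap page.
```c
#include <assert.h>
#include <stddef.h>

struct free_slot { struct free_slot *next; };
struct page { struct free_slot *freelist; size_t free_slots; };

/* Walk the freelist and count its entries. */
static size_t freelist_length(const struct page *p)
{
    size_t n = 0;
    for (const struct free_slot *s = p->freelist; s; s = s->next) n++;
    return n;
}

int main(void)
{
    struct free_slot a = { NULL }, b = { &a };
    struct page pg = { &b, 2 };
    /* The invariant checked after sweeping a page: every free slot the
     * sweep counted must actually be linked on the freelist. */
    assert(freelist_length(&pg) == pg.free_slots);
    return 0;
}
```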
When a Ractor is removed, the freelist in the Ractor cache is not
returned to the GC, so those slots are permanently lost. This commit
recycles the freelist when the Ractor is destroyed, preventing a memory
leak.
If we force-recycle an object before its page is swept, we should clear
it in the mark bitmap. If we don't clear it in the bitmap, then during
sweeping we won't account for this free slot, so the `free_slots` count
of the page will be incorrect.
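A minimal sketch of clearing a slot's bit in a mark bitmap, with an
assumed word-based layout rather than Ruby's actual bitmap macros:
```c
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_WORD 64

/* Clear the mark bit for a slot so the sweep counts it as free. */
static void bitmap_clear(uint64_t *bits, size_t slot_index)
{
    bits[slot_index / BITS_PER_WORD] &=
        ~((uint64_t)1 << (slot_index % BITS_PER_WORD));
}

int main(void)
{
    uint64_t mark_bits[2] = { 0, (uint64_t)1 << 2 };    /* slot 66 marked */
    bitmap_clear(mark_bits, 66);   /* force-recycled: unmark before sweep */
    printf("word after clear: %llu\n", (unsigned long long)mark_bits[1]);
    return 0;
}
```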
When running btest there is a crash when compiled with
RGENGC_CHECK_MODE=4. The crash happens because `during_gc` is not
turned off before `gc_marks_check` is called, causing the marking to
happen on the main mark stack instead of the mark stack created in
`objspace_allrefs`.
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.
GitHub PR #4340.
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow because it needs to travel all
the way up the ancestor tree before returning the cvar value; the deeper
the ancestor tree, the slower cvar access will be.
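Conceptually, the cache memoizes which ancestor owns the variable so later
reads skip the walk. A minimal sketch with hypothetical structures (the
real implementation lives in Ruby's C internals and also invalidates stale
entries, which this sketch omits):
```c
#include <stdio.h>
#include <string.h>

struct klass {
    const char *cvar_name;      /* NULL if this class defines no cvar   */
    int cvar_value;
    struct klass *super;
    struct klass *cvar_cache;   /* memoized ancestor that owns the cvar */
};

static int cvar_get(struct klass *k, const char *name)
{
    if (k->cvar_cache == NULL) {                       /* slow path: full walk */
        for (struct klass *a = k; a; a = a->super) {
            if (a->cvar_name && strcmp(a->cvar_name, name) == 0) {
                k->cvar_cache = a;                     /* fill the cache */
                break;
            }
        }
    }
    return k->cvar_cache ? k->cvar_cache->cvar_value : -1;
}

int main(void)
{
    struct klass base = { "@@logger", 42, NULL, NULL };
    struct klass child = { NULL, 0, &base, NULL };
    printf("%d %d\n",
           cvar_get(&child, "@@logger"),    /* walks ancestors, caches */
           cvar_get(&child, "@@logger"));   /* served from the cache   */
    return 0;
}
```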
The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows that, with the cache, this
branch is 6.5x faster when accessing class variables.
```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]
| |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar | 5.681M| 36.980M|
| | -| 6.51x|
```
Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase; i.e. if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.
Benchmark code:
```ruby
require "benchmark/ips"
require_relative "config/environment"
Benchmark.ips do |x|
x.report "logger" do
ActiveRecord::Base.logger
end
end
```
Ruby 3.0 master / Rails 6.1:
```
Warming up --------------------------------------
logger 155.251k i/100ms
Calculating -------------------------------------
```
Ruby 3.0 with cvar cache / Rails 6.1:
```
Warming up --------------------------------------
logger 1.546M i/100ms
Calculating -------------------------------------
logger 14.857M (± 4.8%) i/s - 74.198M in 5.006202s
```
Lastly we ran a benchmark to demonstrate the difference between master
and our cache as the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.
Ruby 3.0 master:
```
Warming up --------------------------------------
1 module 1.231M i/100ms
30 modules 432.020k i/100ms
100 modules 145.399k i/100ms
Calculating -------------------------------------
1 module 12.210M (± 2.1%) i/s - 61.553M in 5.043400s
30 modules 4.354M (± 2.7%) i/s - 22.033M in 5.063839s
100 modules 1.434M (± 2.9%) i/s - 7.270M in 5.072531s
Comparison:
1 module: 12209958.3 i/s
30 modules: 4354217.8 i/s - 2.80x (± 0.00) slower
100 modules: 1434447.3 i/s - 8.51x (± 0.00) slower
```
Ruby 3.0 with cvar cache:
```
Warming up --------------------------------------
1 module 1.641M i/100ms
30 modules 1.655M i/100ms
100 modules 1.620M i/100ms
Calculating -------------------------------------
1 module 16.279M (± 3.8%) i/s - 82.038M in 5.046923s
30 modules 15.891M (± 3.9%) i/s - 79.459M in 5.007958s
100 modules 16.087M (± 3.6%) i/s - 81.005M in 5.041931s
Comparison:
1 module: 16279458.0 i/s
100 modules: 16087484.6 i/s - same-ish: difference falls within error
30 modules: 15891406.2 i/s - same-ish: difference falls within error
```
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
heap_set_increment essentially only calls heap_allocatable_pages_set.
The two differ in behaviour only when `additional_pages == 0`, which is
only possible because heap_extend_pages may return 0. This commit also
changes heap_extend_pages to always return at least 1.
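A minimal sketch of the guarantee being added, with an assumed 1.8x growth
formula standing in for the real heap_extend_pages calculation:
```c
#include <stdio.h>
#include <stddef.h>

static size_t extend_pages(size_t total_pages)
{
    size_t next = total_pages * 18 / 10;                      /* assumed growth */
    size_t extend = next > total_pages ? next - total_pages : 0;
    return extend > 1 ? extend : 1;                           /* never return 0 */
}

int main(void)
{
    /* A tiny heap used to compute 0 additional pages; now it gets 1. */
    printf("%zu %zu\n", extend_pages(10), extend_pages(1));   /* prints "8 1" */
    return 0;
}
```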