`overloaded_cme_table` keeps cme -> monly_cme pairs to manage the
corresponding `monly_cme` for each `cme`. The lifetime of a `monly_cme`
should be longer than that of its `cme`, but the previous patch lost the
reference to the living `monly_cme`.
Now `overloaded_cme_table` values are always roots (only the keys are weak
references), which means a `monly_cme` is not freed until the corresponding
`cme` is invalidated.
To make management easy, move `overloaded_cme_table` to `rb_vm_t`.
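A minimal sketch of the resulting mark behaviour, assuming an `st_table`
stored in `rb_vm_t` (the function name is illustrative, not the actual
implementation):
```c
#include <ruby.h>
#include <ruby/st.h>

/* Keys (cme) are weak: not marked here, so they can be invalidated.
 * Values (monly_cme) are marked as roots, so a monly_cme stays alive
 * until its cme is invalidated and the pair is removed. */
static int
vm_mark_overloaded_cme_i(st_data_t key, st_data_t value, st_data_t arg)
{
    rb_gc_mark((VALUE)value);  /* value is a root */
    return ST_CONTINUE;        /* key is intentionally not marked */
}

/* Called from VM root marking, roughly:
 *   st_foreach(vm->overloaded_cme_table, vm_mark_overloaded_cme_i, 0);
 */
```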
`def` (`rb_method_definition_t`) is shared by multiple callable
method entries (cme, `rb_callable_method_entry_t`).
There are two issues:
* old -> young reference: `cme1->def->mandatory_only_cme = monly_cme`
  If `cme1` is young and `monly_cme` is young, there is no problem.
  However, another old `cme2` can refer to the same `def`; in this case,
  the old `cme2` points to the young `monly_cme`, which violates the
  gengc assumption.
* cmes sharing a `def` can have different `defined_class` values, but a
  `monly_cme` only has one `defined_class`. This does not make sense,
  and the `monly_cme` should be created for a cme (not for a `def`).
To solve these issues, this patch allocates a `monly_cme` per `cme`.
A `cme` has no spare room to store a pointer to its `monly_cme`,
so this patch introduces `overloaded_cme_table`, a weak-key map
`[cme] -> [monly_cme]`.
`def::body::iseqptr::monly_cme` is deleted.
The first issue was reported by Alan Wu.
When using `rp(obj)` for debugging during development, it may be
useful to know that an object is soon to be swept. Add a new letter to
the object dump for whether the object is garbage. It's easy to forget
about lazy sweep.
Except on Windows and MinGW, we can only use compaction on systems that
use mmap (only systems that use mmap can use the read barrier that
compaction requires). We don't need to separately detect whether we can
support compaction or not.
* Lazily create singletons on instance_{exec,eval}
Previously when instance_exec or instance_eval was called on an object,
that object would be given a singleton class so that method
definitions inside the block would be added to the object rather than
its class.
This commit aims to improve performance by delaying the creation of the
singleton class unless/until one is needed for method definition. Most
of the time instance_eval is used without any method definition.
This was implemented by adding a flag to the cref indicating that it
represents a singleton of the object rather than a class itself. In this
case CREF_CLASS returns the object's existing class, and the singleton
class is only created when we are actually defining a method (either via
definemethod or VM_SPECIAL_OBJECT_CBASE, which is used for undef and
alias).
This also happens to fix what I believe is a bug. Previously
instance_eval behaved differently with regards to constant access for
true/false/nil than for all other objects. I don't think this was
intentional.
```ruby
String::Foo = "foo"
"".instance_eval("Foo")   # => "foo"

Integer::Foo = "foo"
123.instance_eval("Foo")  # => "foo"

TrueClass::Foo = "foo"
true.instance_eval("Foo") # NameError: uninitialized constant Foo
```
This also slightly changes the error message when trying to define a method
through instance_eval on an object which can't have a singleton class.
Before:
```
$ ruby -e '123.instance_eval { def foo; end }'
-e:1:in `block in <main>': no class/module to add method (TypeError)
```
After:
```
$ ./ruby -e '123.instance_eval { def foo; end }'
-e:1:in `block in <main>': can't define singleton (TypeError)
```
IMO this error is a small improvement on the original and better matches
the (both old and new) message when defining a method using `def self.`:
```
$ ruby -e '123.instance_eval{ def self.foo; end }'
-e:1:in `block in <main>': can't define singleton (TypeError)
```
Co-authored-by: Matthew Draper <matthew@trebex.net>
* Remove "under" argument from yield_under
* Move CREF_SINGLETON_SET into vm_cref_new
* Simplify vm_get_const_base
* Fix leaf VM_SPECIAL_OBJECT_CONST_BASE
Co-authored-by: Matthew Draper <matthew@trebex.net>
suseconds_t, the type of tv_usec, may be defined as a wider type than
tv_nsec's type (long), so the usec-to-nsec conversion needs an explicit
cast.
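A sketch of the conversion in question (the helper name is illustrative):
```c
#include <sys/time.h>  /* struct timeval, suseconds_t */
#include <time.h>      /* struct timespec */

static struct timespec
timeval_to_timespec(struct timeval tv)
{
    struct timespec ts;
    ts.tv_sec = tv.tv_sec;
    /* tv_usec is a suseconds_t and may be wider than long, so the
     * usec -> nsec conversion needs an explicit cast to long. */
    ts.tv_nsec = (long)(tv.tv_usec * 1000);
    return ts;
}
```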
This commit adds a Ractor cache for every size pool. Previously, all VWA
allocated objects used the slowpath and locked the VM.
On a micro-benchmark that benchmarks String allocation:
```
VWA turned off:
  29.196591   0.889709  30.086300 (  9.434059)
VWA before this commit:
  29.279486  41.477869  70.757355 ( 12.527379)
VWA after this commit:
  16.782903   0.557117  17.340020 (  4.255603)
```
Updating RCLASS_PARENT_SUBCLASSES and RCLASS_MODULE_SUBCLASSES while
compacting can trigger the read barrier. This commit makes
RCLASS_SUBCLASSES a doubly linked list with a dedicated head object so
that we can add and remove entries from the list without having to touch
an object in the Ruby heap.
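A sketch of the idea (illustrative types and names): with a dedicated
head node, unlinking touches only malloc'd list nodes and never reads an
object in the Ruby heap, so the read barrier cannot fire:
```c
#include <ruby.h>
#include <stdlib.h>

struct subclass_entry {
    VALUE klass;                   /* 0 in the dedicated head node */
    struct subclass_entry *next;
    struct subclass_entry *prev;
};

static void
subclass_entry_unlink(struct subclass_entry *entry)
{
    /* prev always exists because of the dedicated head node. */
    entry->prev->next = entry->next;
    if (entry->next) entry->next->prev = entry->prev;
    free(entry);  /* entry->klass itself is never dereferenced */
}
```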
* `GC.measure_total_time = true` enables total time measurement (default: true)
* `GC.measure_total_time` returns the current flag.
* `GC.total_time` returns the measured total time in nanoseconds.
* `GC.stat(:time)` (and the Hash form) returns the measured total time in milliseconds.
Compared with C methods, built-in methods written in Ruby are slower
when only mandatory parameters are given, because they need to check
the arguments and fill in default values for optional and keyword
parameters (C methods can check the number of parameters with `argc`,
so there is no such overhead). Passing only mandatory arguments is
common (optional arguments are exceptional in many cases), so it is
important to provide a fast path for such common cases.
`Primitive.mandatory_only?` is a special builtin function used with an
`if` expression like this:
```ruby
def self.at(time, subsec = false, unit = :microsecond, in: nil)
  if Primitive.mandatory_only?
    Primitive.time_s_at1(time)
  else
    Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
  end
end
```
and it makes two ISeqs:
```
# (1)
def self.at(time, subsec = false, unit = :microsecond, in: nil)
  Primitive.time_s_at(time, subsec, unit, Primitive.arg!(:in))
end

# (2)
def self.at(time)
  Primitive.time_s_at1(time)
end
```
and (2) is pointed to by (1). Note that `Primitive.mandatory_only?`
should be used only in the condition of an `if` statement, and the
`if` statement should be the entire method body (you cannot put any
expression before or after the `if` statement).
A method entry with `mandatory_only?` (`Time.at` in the case above)
is marked as `iseq_overload`. When the method is dispatched with only
mandatory arguments (`Time.at(0)` for example), another method entry
with ISeq (2) is created as the mandatory-only method entry, and it
is cached in the inline method cache.
The idea is similar to the one discussed in
https://bugs.ruby-lang.org/issues/16254, but it only checks for the
mandatory-arguments-only case, because in many cases only mandatory
parameters are given. If we find other cases (optional or keyword
parameters used frequently in a way that hurts performance), we can
extend the feature.
With RVARGC we always store the rb_classext_t in the same slot as the
RClass struct that refers to it. So we don't need to store the pointer
or access through the pointer anymore and can switch the RCLASS_EXT
macro to use an offset.
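Conceptually, the macro changes from a pointer load to pointer
arithmetic, something like this sketch (not the exact definitions):
```c
/* Before: reach rb_classext_t through a pointer stored in RClass. */
#define RCLASS_EXT(c) (RCLASS(c)->ptr)

/* After: with RVARGC the rb_classext_t sits immediately after the
 * RClass struct in the same slot, so compute its address by offset. */
#define RCLASS_EXT(c) ((rb_classext_t *)((char *)(c) + sizeof(struct RClass)))
```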
This commit fixes a memory leak introduced in an early part of the
variable width allocation project that would prevent the rb_classext_t
struct from being free'd when the class is swept.
This commit deprecates rb_gc_force_recycle and converts it to a no-op
function. It also removes invalidate_mark_stack_chunk, since only
rb_gc_force_recycle used it.
```
../gc.c:2342:45: warning: comparison of integers of different signs: 'short' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
GC_ASSERT(size_pools[pool_id].slot_size == slot_size);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~
```
Add a cast to short, because the `GC_ASSERT`s in `size_pool_for_size`
already cast to short.
```
../gc.c:2342:25: warning: array subscript is of type 'char' [-Wchar-subscripts]
GC_ASSERT(size_pools[pool_id].slot_size == slot_size);
^~~~~~~~
```
It seems like `gc_verify_compaction_references` is not protected in case
alignment is wrong. This commit pushes the alignment check down to
`gc_start_internal` so that anyone trying to compact will check page
alignment.
I think this method may be getting called on PowerPC and the alignment
might be wrong.
http://rubyci.s3.amazonaws.com/ppc64le/ruby-master/log/20211021T190006Z.fail.html.gz
I'm looking through the places where YJIT needs notifications. It looks
like these changes to gc.c and vm_callinfo.h have become unnecessary
since 84ab77ba59. This commit just makes the diff against upstream
smaller, but otherwise shouldn't change any behavior.
Added UJIT_CHECK_MODE. Set to 1 to double check method dispatch in
generated code.
It's surprising to me that we need to watch both cc and cme. There might
be opportunities to simplify there.
Runtime assertion for the argument declared as non-null.
This macro does nothing if `RBIMPL_ATTR_NONNULL` is effective,
otherwise asserts that the argument is non-null.
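A sketch of such a macro; the macro names here are hypothetical, not the
actual ones in include/ruby/:
```c
#include <assert.h>

#ifdef ATTR_NONNULL_IS_EFFECTIVE   /* hypothetical feature-test macro */
/* The compiler already assumes the argument is non-null, so a runtime
 * check would be dead code: expand to nothing. */
# define ASSERT_ARG_NONNULL(ptr) ((void)0)
#else
/* No compiler support: check at runtime instead. */
# define ASSERT_ARG_NONNULL(ptr) assert((ptr) != NULL)
#endif
```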
Commit 123eeb1c1a added incremental GC,
which moved the resetting of malloc_increase and oldmalloc_increase to
before marking. However, during_minor_gc is not set until
gc_marks_start, so the value will be from the last GC run rather than
the current one.
Before the incremental GC commit, this code was in gc_before_sweep,
which ran before sweeping (after marking), so the value of
during_minor_gc was correct.
When an object is moved back into its T_MOVED slot, the flags of the
T_MOVED are not copied, so the FL_FROM_FREELIST flag is lost. This
causes total_freed_objects to always be incremented.
When invalidating a page during compaction, the free_slots count should
be updated for the page of the object and not the page of the forwarding
address (since the object gets moved back to the forwarding address).
Force-recycled objects can create a freelist for the page. At the
start of sweeping we should append to that freelist to avoid
permanently losing the slots on it.
This commit implements size classes in the GC for the Variable Width
Allocation feature. Unless `USE_RVARGC` compile flag is set, only a
single size class is created, maintaining current behaviour. See the
redmine ticket for more details.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
This commit removes T_PAYLOAD since the new VWA implementation no longer
requires T_PAYLOAD types.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
When a `T_DATA` object is created whose class has not undefined or
redefined the alloc function, the alloc function now gets undefined by
Data_Wrap_Struct et al. Optionally, a future release may also warn
that this is being done.
This should help developers of C extensions to meet the requirements
explained in "doc/extension.rdoc". Without a check like this, there is
no easy way for an author of a C extension to see where they have made
a mistake.
Commit c32218de1b turned the during_compacting
flag into 2 bits to support the case where there is no write barrier.
But commit 32b7dcfb56 changed compaction to
always enable the write barrier. This commit cleans up some of the
leftover code.
This fixes cases where exceptions raised using Thread#raise are
swallowed by finalizers and not delivered to the running thread.
This could cause issues with finalizers that rely on pending interrupts,
but that case is expected to be rarer.
Fixes [Bug #13876]
Fixes [Bug #15507]
Co-authored-by: Koichi Sasada <ko1@atdot.net>
This commit adds an assertion after `gc_page_sweep` to verify that the
freelist length is equal to the number of free slots in the page.
When a Ractor is removed, the freelist in the Ractor cache is not
returned to the GC, leaving the freelist permanently lost. This commit
recycles the freelist when the Ractor is destroyed, preventing a memory
leak from occurring.
If we force recycle an object before the page is swept, we should clear
it in the mark bitmap. If we don't clear it in the bitmap, then during
sweeping we won't account for this free slot so the `free_slots` count
of the page will be incorrect.
When running btest there is a crash when compiled with
RGENGC_CHECK_MODE=4. The crash happens because `during_gc` is not
turned off before `gc_marks_check` is called, causing the marking to
happen on the main mark stack instead of the mark stack created in
`objspace_allrefs`.
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.
GitHub PR #4340.
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow because it needs to travel all
the way up the ancestor tree before returning the cvar value. The deeper
the ancestor tree, the slower cvar access will be.
The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows that, with the cache, this
branch is 6.5x faster when accessing class variables.
```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]
| |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar | 5.681M| 36.980M|
| | -| 6.51x|
```
Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase; i.e. if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.
Benchmark code:
```ruby
require "benchmark/ips"
require_relative "config/environment"
Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```
Ruby 3.0 master / Rails 6.1:
```
Warming up --------------------------------------
logger 155.251k i/100ms
Calculating -------------------------------------
```
Ruby 3.0 with cvar cache / Rails 6.1:
```
Warming up --------------------------------------
logger 1.546M i/100ms
Calculating -------------------------------------
logger 14.857M (± 4.8%) i/s - 74.198M in 5.006202s
```
Lastly, we ran a benchmark to demonstrate the difference between master
and our cache as the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.
Ruby 3.0 master:
```
Warming up --------------------------------------
1 module 1.231M i/100ms
30 modules 432.020k i/100ms
100 modules 145.399k i/100ms
Calculating -------------------------------------
1 module 12.210M (± 2.1%) i/s - 61.553M in 5.043400s
30 modules 4.354M (± 2.7%) i/s - 22.033M in 5.063839s
100 modules 1.434M (± 2.9%) i/s - 7.270M in 5.072531s
Comparison:
1 module: 12209958.3 i/s
30 modules: 4354217.8 i/s - 2.80x (± 0.00) slower
100 modules: 1434447.3 i/s - 8.51x (± 0.00) slower
```
Ruby 3.0 with cvar cache:
```
Warming up --------------------------------------
1 module 1.641M i/100ms
30 modules 1.655M i/100ms
100 modules 1.620M i/100ms
Calculating -------------------------------------
1 module 16.279M (± 3.8%) i/s - 82.038M in 5.046923s
30 modules 15.891M (± 3.9%) i/s - 79.459M in 5.007958s
100 modules 16.087M (± 3.6%) i/s - 81.005M in 5.041931s
Comparison:
1 module: 16279458.0 i/s
100 modules: 16087484.6 i/s - same-ish: difference falls within error
30 modules: 15891406.2 i/s - same-ish: difference falls within error
```
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
heap_set_increment essentially just calls heap_allocatable_pages_set;
the two only differ in behaviour when `additional_pages == 0`, which is
only possible because heap_extend_pages may return 0. This commit also
changes heap_extend_pages to always return at least 1.
If incremental sweeping is in progress when gc_set_initial_pages is
called, there is an assertion error. The following patch artificially
reproduces the bug:
```
diff --git a/gc.c b/gc.c
index c3157dbe2c..d7282cf8f0 100644
--- a/gc.c
+++ b/gc.c
@@ -404,7 +404,7 @@ int ruby_rgengc_debug;
* 5: show all references
*/
#ifndef RGENGC_CHECK_MODE
-#define RGENGC_CHECK_MODE 0
+#define RGENGC_CHECK_MODE 1
#endif
// Note: using RUBY_ASSERT_WHEN() extend a macro in expr (info by nobu).
@@ -10821,6 +10821,10 @@ gc_set_initial_pages(void)
void
ruby_gc_set_params(void)
{
+ for (int i = 0; i < 10000; i++) {
+ rb_ary_new();
+ }
+
/* RUBY_GC_HEAP_FREE_SLOTS */
if (get_envparam_size("RUBY_GC_HEAP_FREE_SLOTS", &gc_params.heap_free_slots, 0)) {
/* ok */
```
The crash looks like:
```
Assertion Failed: ../gc.c:2038:heap_add_page:!(heap == heap_eden && heap->sweeping_page)
```
NUM_IN_PAGE(page->start) will sometimes return 0 or 1 depending on
how the alignment of the 40-byte slots works out. This commit uses the
NUM_IN_PAGE function to shift the bitmap down on the first bitmap plane.
Iterating on the first bitmap plane is "special", but this commit allows
us to align object addresses on something besides 40 bytes, and also
eliminates the need to fill guard bits.
We are only iterating over the eden heap so `heap_eden->total_pages`
contains the exact number of pages we need to allocate for.
`heap_allocated_pages` may contain pages in the tomb.
Instead of keeping track of the current bit plane, keep track of the
actual slot when compacting. This means we don't need to re-scan
objects inside the same bit plane when we continue with movement.
When objects are popped from the mark stack, we check that the object is
the right type (otherwise an rb_bug happens). The problem is that when
we pop a bad object from the stack, we have no idea what pushed the bad
object on the stack.
This change makes an error happen when a bad object is pushed on the
mark stack, that way we can track down the source of the bug.
Since compaction can be concurrent, the machine stack is allowed to
change while compaction is happening. When compaction finishes, there
may be references on the machine stack that need to be reverted so that
we can remove the read barrier.
The current fix for PAGE_SIZE macro detection in autoconf does not work
correctly. I see the following output when running configure on Linux:
```
checking PAGE_SIZE is defined... no
```
Linux has the PAGE_SIZE macro. This is happening because the macro
exists in sys/user.h and not in the malloc headers.
This allows us to allocate the right size for the object in advance,
meaning that we don't have to pay the cost of ivar table extension
later. The idea is that if an object type ever became "extended" at
some point, then it is very likely it will become extended again. So we
may as well allocate the ivar table up front.
Previously imemo_ast was handled as WB-protected, which caused a
segfault with the following code:
```ruby
# shareable_constant_value: literal
M0 = {}
M1 = {}
...
M100000 = {}
```
My analysis is here: `shareable_constant_value: literal` creates many
Hash instances during parsing and adds them to the node_buffer of
imemo_ast. However, their contents are missed during marking because
imemo_ast was incorrectly WB-protected.
This changeset makes imemo_ast WB-unprotected.
IV index tables weren't being freed. This program would leak memory:
```ruby
loop do
  k = Class.new do
    def initialize
      @a = 1
      @b = 1
      @c = 1
      @d = 1
      @e = 1
      @f = 1
      @g = 1
    end
  end
  k.new
end
```
This commit fixes the leak.
Emscripten provides "emscripten_scan_stack" to get the beginning and end
pointers of the stack for conservative GC.
Also, "emscripten_scan_registers" allows the GC to mark local variables
in WASM.
`rb_define_const` can add objects as "mark objects". This is to make
code like this work:
33d6e92e0c/ext/etc/etc.c (L1201)
```
rb_define_const(rb_cStruct, "Passwd", sPasswd); /* deprecated name */
```
sPasswd is a heap-allocated object that is also a C global, so we can't
move it (it needs to be pinned). However, we have many calls to
`rb_define_const` that just pass in an integer, like this:
```
rb_define_const(rb_cDBM, "WRITER", INT2FIX(O_RDWR|RUBY_DBM_RW_BIT));
```
Non-heap-allocated objects like integers will never move, so there is
no reason to waste time in the GC marking/pinning them.
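A sketch of the check this enables (the helper name is illustrative):
```c
#include <ruby.h>

static void
register_const_value(VALUE val)
{
    /* Special constants (Fixnum, static Symbol, true/false/nil, ...)
     * are immediates: they never move, so skip marking/pinning. */
    if (RB_SPECIAL_CONST_P(val)) return;

    /* Heap object referenced from C: mark and pin it for the
     * lifetime of the process. */
    rb_gc_register_mark_object(val);
}
```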
Because of the `double` in `RFloat`, `RVALUE` would by default be
aligned to `sizeof(double)` on platforms where `double` is wider than
`VALUE`. The size of `RVALUE` is now a multiple of 5 words.
[A previous commit](b59077eecf) removed some macro definitions that are
used when RGENGC_CHECK_MODE >= 4 because they were using data stored on
the objspace, which is not ractor-safe.
This commit reinstates those macro definitions, using the current ractor
instead.
Co-authored-by: peterzhu2118 <peter@peterzhu.ca>
Some objects can survive the GC before compaction, but get collected in
the second compaction. This means we could have objects reference
T_MOVED during "free" in the second, compacting GC. If that is the
case, we need to invalidate those "moved" addresses. Invalidation is
done via read barrier, so we need to make sure the read barrier is
active even during `GC.compact`.
This also means we don't actually need to do one GC before compaction,
we can just do the compaction and GC in one step.
The constant cache `IC` is accessed in a non-atomic manner and has
thread-safety issues, so Ruby 3.0 disables the const cache on
non-main ractors.
This patch enables it by introducing `imemo_constcache` and allocating
one on every refill of the const cache, like `imemo_callcache`.
[Bug #17510]
Now `IC` only has one entry, `IC::entry`, and it points to an
`iseq_inline_constant_cache_entry` managed by a T_IMEMO object.
`IC` is an atomic data structure, so `rb_mjit_before_vm_ic_update()` and
`rb_mjit_after_vm_ic_update()` are no longer needed.
Separate some fields from rb_ractor_t into rb_ractor_pub, put it at
the beginning of rb_ractor_t, and declare it in vm_core.h so that
vm_core.h can access rb_ractor_pub fields.
Now rb_ec_ractor_hooks() is a complete inline function and there is no
MJIT-related issue.
ObjectSpace._id2ref(id) can return any object, even an unshareable
one, so this patch raises RangeError if it runs in multi-ractor mode
and the found object is unshareable.
The per-ractor page cache (GH-#3842) only cached 1 page; this patch
caches several pages to keep at least 512 free slots if available.
If the number of cached free slots is increased, all cached slots
will still be collected when the GC is invoked.
A program with multiple ractors can consume more objects per unit
time, so this patch sets the minimum/maximum free_slots relative to
the ractor count (up to 8).
Lazy sweep tries to collect free (unused) slots incrementally, but it
only collects a few pages. This patch makes lazy sweep collect more
objects (at least 2048) so that the GC overhead of multi-ractor
execution is reduced.
The write barrier requires the VM lock because it accesses the
VM-global bitmap, but RB_VM_LOCK_ENTER() can trigger GC because
another ractor may be waiting to invoke GC, and RB_VM_LOCK_ENTER() is
a barrier point. This means GC can run before the write barrier has
done its job.
To prevent this situation, RB_VM_LOCK_ENTER_NO_BARRIER() is
introduced. This lock primitive does not become a GC barrier point.
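Usage mirrors the ordinary pair, minus the barrier-point behaviour; a
sketch (the bitmap update is an illustrative stub):
```c
static void
update_vm_global_bitmap(VALUE a, VALUE b)
{
    /* illustrative stub for the VM-global bitmap update */
    (void)a; (void)b;
}

static void
gc_writebarrier_example(VALUE a, VALUE b)
{
    /* Take the VM lock without becoming a GC barrier point, so GC
     * cannot run while the write barrier is half-applied. */
    RB_VM_LOCK_ENTER_NO_BARRIER();
    {
        update_vm_global_bitmap(a, b);
    }
    RB_VM_LOCK_LEAVE_NO_BARRIER();
}
```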
Object allocation currently requires the VM-global lock to synchronize
the objspace, which of course introduces huge overhead.
This patch caches some slots (in a page) per ractor and uses the cached
slots for object allocation. If there are no cached slots, it acquires
the global lock and gets new cached slots, or starts GC (marking or
lazy sweeping).
This seems to be breaking the build for some reason.
This command can reproduce it:
`make yes-test-all TESTS=--repeat-count=20`
This reverts commit 88bb1a672c.
Incremental sweeping should sweep until we find a slot for objects to
use. `heap_increment` was adding a page to the heap even though we
would sweep immediately after.
Co-authored-by: John Hawthorn <john@hawthorn.email>
To manage ractor-local data for C extension, the following APIs
are defined.
* rb_ractor_local_storage_value_newkey
* rb_ractor_local_storage_value
* rb_ractor_local_storage_value_set
* rb_ractor_local_storage_ptr_newkey
* rb_ractor_local_storage_ptr
* rb_ractor_local_storage_ptr_set
First, you need to create a storage key with
rb_ractor_local_storage_(value|ptr)_newkey().
For ptr storage, it accepts a storage type describing how to mark and
how to free the value along with the ractor's lifetime.
rb_ractor_local_storage_value(_set) is used to access a VALUE,
and rb_ractor_local_storage_ptr(_set) is used to access a pointer.
random.c uses this API.
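A sketch of how an extension might use the ptr variant, assuming the
declarations in include/ruby/ractor.h (buffer size and names are
illustrative):
```c
#include <stdlib.h>
#include <ruby.h>
#include <ruby/ractor.h>

static rb_ractor_local_key_t buffer_key;

static void
buffer_free(void *ptr)
{
    free(ptr);  /* called when the owning ractor is destroyed */
}

static const struct rb_ractor_local_storage_type buffer_type = {
    NULL,         /* mark: the buffer holds no Ruby objects */
    buffer_free,  /* free: tied to the ractor's lifetime */
};

/* Each ractor lazily gets its own scratch buffer. */
static char *
ractor_buffer(void)
{
    char *buf = rb_ractor_local_storage_ptr(buffer_key);
    if (buf == NULL) {
        buf = calloc(1, 256);
        rb_ractor_local_storage_ptr_set(buffer_key, buf);
    }
    return buf;
}

void
Init_myext(void)
{
    buffer_key = rb_ractor_local_storage_ptr_newkey(&buffer_type);
}
```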
Both explicit compaction routines (gc_compact and the verify references form)
need to clear the heap before executing compaction. Otherwise some
objects may not be alive, and we'll need the read barrier. The heap
must only contain *live* objects if we want to disable the read barrier
during explicit compaction.
The previous commit was missing the "clear the heap" phase from the
"verify references" explicit compaction function.
Fixes [Bug #17306]
Auto Compaction uses mprotect to implement a read barrier. mprotect can
only work on regions of memory that are a multiple of the OS page size.
Ruby's pages are a multiple of 4kb, but some platforms (like ppc64le)
don't have 4kb page sizes. This commit disables the feature on those
platforms.
Fixes [Bug #17306]
To make it possible to build certain Ractor-related extensions, some
functions should be exposed:
* include/ruby/thread_native.h
* rb_native_mutex_*
* rb_native_cond_*
* include/ruby/ractor.h
* RB_OBJ_SHAREABLE_P(obj)
* rb_ractor_shareable_p(obj)
* rb_ractor_std*()
* rb_cRactor
and rm ractor_pub.h
and rename srcdir/ractor.h to srcdir/ractor_core.h
(to avoid conflict with include/ruby/ractor.h)
* `GC.auto_compact=` and `GC.auto_compact` can be used to control when
compaction runs. Setting `auto_compact=` to true will cause
compaction to occur during major collections. At the moment,
compaction adds significant overhead to major collections, so please
test first!
[Feature #17176]
As of 0b81a484f3, `ROBJECT_IVPTR` will
always return a value, so we don't need to test whether or not we got
one. T_OBJECTs always come to life as embedded objects, so they will
return an ivptr, and when they become "unembedded" they will have an
ivptr at that point too.
We are seeing an error where code that is generated with MJIT contains
references to objects that have been moved. I believe this is due to a
race condition in the compaction function.
`gc_compact` has two steps:
1. Run a full GC to pin objects
2. Compact / update references
Step one is executed with `garbage_collect`. `garbage_collect` calls
`gc_enter` / `gc_exit`, these functions acquire a JIT lock and release a
JIT lock. So a lock is held for the duration of step 1.
Step two is executed by `gc_compact_after_gc`. It also holds a JIT
lock.
I believe the problem is that the JIT is free to execute between step 1
and step 2. It copies call cache values, but doesn't pin them when it
copies them. So the compactor thinks it's OK to move the call cache
even though it is not safe.
We need to hold a lock for the duration of `garbage_collect` *and*
`gc_compact_after_gc`. This patch introduces a lock level which
increments and decrements. The compaction function can increment and
decrement the lock level and prevent MJIT from executing during both
steps.
rb_objspace_reachable_objects_from(obj) is used to traverse all
objects reachable from obj. This function modifies the objspace, but
it is not ractor-safe (thread-safe). This patch fixes the problem.
Strategy:
(1) call the GC mark process when during_gc
(2) call a Ractor-local custom mark func when !during_gc
Unshareable objects should not be touched from multiple ractors,
so ObjectSpace.each_object should be restricted. In multi-ractor
mode, ObjectSpace.each_object iterates over shareable objects only.
[Feature #17270]
iv_index_tbl manages instance variable indexes (ID -> index).
This data structure should be synchronized between ractors, so this
patch introduces some VM locks.
This patch also introduces an atomic ivar cache used by the
set/getinlinecache instructions. To allow atomic updates of the ivar
cache (IVC), the iv_index_tbl data structure was changed to manage
(ID -> entry), where an entry holds a serial and an index. The IVC
points to this entry so that cache updates become atomic.
Heap allocated objects are never special constants. Since we're walking
the heap, we know none of these objects can be special. Also, adding
the object to the freelist will poison the object, so we can't check
that the type is T_NONE after poison.
When finalizers run (in `rb_objspace_call_finalizer`) the table is
copied to a linked list that is not managed by the GC. If compaction
runs, the references in the linked list can go bad.
Finalizer table shouldn't be used frequently, so lets pin references in
the table so that the linked list in `rb_objspace_call_finalizer` is
safe.
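A sketch of the distinction (illustrative table iterator): `rb_gc_mark`
marks and pins, while `rb_gc_mark_movable` would let compaction move the
value:
```c
#include <ruby.h>
#include <ruby/st.h>

static int
pin_finalizer_i(st_data_t key, st_data_t value, st_data_t data)
{
    /* rb_gc_mark() pins; the GC-unmanaged linked list built later in
     * rb_objspace_call_finalizer() therefore stays valid even if
     * compaction runs in between. */
    rb_gc_mark((VALUE)value);
    return ST_CONTINUE;
}
```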
rb_obj_raw_info is called while printing out crash messages and
sometimes called during garbage collection. Calling rb_raise() in these
situations is undesirable because it can start executing ensure blocks.
This commit introduces the Ractor mechanism to run Ruby programs in
parallel. See doc/ractor.md for more details about Ractor.
See ticket [Feature #17100] for the implementation details and
discussions.
[Feature #17100]
This commit does not complete the implementation. You may find many
bugs when using Ractor. Also, the specification may change, so this
feature is experimental. You will see a warning when you make the
first Ractor with `Ractor.new`.
I hope this feature can help protect programmers from thread-safety
issues.
Previously, when an object is first initialized, ROBJECT_EMBED isn't
set. This means that for brand new objects, ROBJECT_NUMIV(obj) is 0 and
ROBJECT_IV_INDEX_TBL(obj) is NULL.
Previously, this combination meant that the inline cache would never be
initialized when setting an ivar on an object for the first time since
iv_index_tbl was NULL, and if it were it would never be used because
ROBJECT_NUMIV was 0. Both cases always fell through to the generic
rb_ivar_set which would then set the ROBJECT_EMBED flag and initialize
the ivar array.
This commit changes rb_class_allocate_instance to set the ROBJECT_EMBED
flag on the object initially and to initialize all members of the
embedded array to Qundef. This allows the inline cache to be set
correctly on first use and to be used on future uses.
This moves rb_class_allocate_instance to gc.c, so that it has access to
newobj_of. This seems appropriate given that there are other allocating
methods in this file (ex. rb_data_object_wrap, rb_imemo_new).
Ruby strings don't always have a null terminator, so we can't use one
as a regular C string. By reading only the first len bytes of the Ruby
string, we won't read past its end.
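A sketch of the bounded-read pattern (the helper name is illustrative):
```c
#include <string.h>
#include <ruby.h>

/* Copy a Ruby string into a fixed C buffer. The read is bounded by
 * RSTRING_LEN(), never by a NUL terminator the string may lack. */
static void
copy_rstring(VALUE str, char *buf, size_t cap)
{
    long len = RSTRING_LEN(str);
    if ((size_t)len >= cap) len = (long)(cap - 1);
    memcpy(buf, RSTRING_PTR(str), (size_t)len);
    buf[len] = '\0';  /* terminate the C copy ourselves */
}
```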
`rb_objspace_call_finalizer` creates zombies, but does not do the correct accounting (it should increment `heap_pages_final_slots` whenever it creates a zombie). When we do correct accounting, `heap_pages_final_slots` should never underflow (the check for underflow was introduced in 39725a4db6).
The implementation moves the accounting from the functions that call `make_zombie` into `make_zombie` itself, which reduces code duplication.
Before this commit, iclasses were "shady", i.e. not protected by write
barriers. Because of that, the GC had to spend more time marking these
objects than otherwise.
Applications that make heavy use of modules should see a reduction in
GC time, as they have a significant number of live iclasses on the heap.
- Put logic for iclass method table ownership into a function
- Remove calls to WB_UNPROTECT and insert write barriers for iclasses
This commit relies on the following invariant: for any non-origin
iclass `I`, `RCLASS_M_TBL(I) == RCLASS_M_TBL(RBasic(I)->klass)`. This
invariant did not hold prior to 98286e9 for classes and modules that
have prepended modules.
[Feature #16984]
To optimize the sweep phase, there is a bit operation that sets mark
bits for the out-of-range bits in the last bit_t.
However, if there are no out-of-range bits, it sets the whole last
bit_t as mark bits, which breaks the assumption that unmarked objects
will be swept.
GC_DEBUG=1 makes sizeof(RVALUE)=64 on my machine, and this condition
happens there.
It took me one Saturday to debug this.
imemo_callcache and imemo_callinfo were not handled by the `objspace`
module and were showing up as "unknown" in the dump. Extract the code for
naming imemos and use that in both the GC and the `objspace` module.
This commit expands heap pages to be exactly 16KiB and eliminates the
`REQUIRED_SIZE_BY_MALLOC` constant.
I believe the goal of `REQUIRED_SIZE_BY_MALLOC` was to make the heap
pages consume some multiple of OS page size. 16KiB is convenient because
OS page size is typically 4KiB, so one Ruby page is four OS pages.
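That allocation can be requested explicitly; a sketch with
`posix_memalign`:
```c
#include <stdlib.h>

#define HEAP_PAGE_SIZE (16 * 1024)  /* exactly four 4KiB OS pages */

/* Request a block that is both 16KiB long and 16KiB aligned, so a Ruby
 * heap page occupies whole OS pages with no malloc guesswork. */
static void *
heap_page_body_allocate(void)
{
    void *ptr = NULL;
    if (posix_memalign(&ptr, HEAP_PAGE_SIZE, HEAP_PAGE_SIZE) != 0) {
        return NULL;
    }
    return ptr;
}
```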
Do not guess how malloc works
=============================
We should not try to guess how `malloc` works and instead request (and
use) four OS pages.
Here is my reasoning:
1. Not all mallocs will store metadata in the same region as user requested
memory. jemalloc specifically states[1]:
> Information about the states of the runs is stored as a page map at the beginning of each chunk.
2. We're using `posix_memalign` to request memory. This means that the
first address must be divisible by the alignment. Our allocation is
page aligned, so if malloc is storing metadata *before* the page,
then we've already crossed page boundaries.
3. Some allocators like glibc will use the memory at the end of the
page. I am able to demonstrate that glibc will return pointers
within the page boundary that contains `heap_page_body`[2]. We
*expected* the allocation to look like this:
![Expected alignment](https://user-images.githubusercontent.com/3124/85803661-8a81d600-b6fc-11ea-8cb6-7dbdb434a43b.png)
But since `heap_page` is allocated immediately after
`heap_page_body`[3], instead the layout looks like this:
![Actual alignment](https://user-images.githubusercontent.com/3124/85803714-a1c0c380-b6fc-11ea-8c17-8b37369e17ee.png)
This is not optimal because `heap_page` gets allocated immediately
after `heap_page_body`. We frequently write to `heap_page`, so the
bottom OS page of `heap_page_body` is very likely to be copied.
One more object per page
========================
In jemalloc, allocation requests are rounded to the nearest boundary,
which in this case is 16KiB[4], so `REQUIRED_SIZE_BY_MALLOC` space is
just wasted on jemalloc.
On glibc, the space is not wasted, but instead it is very likely to
cause page faults.
Instead of wasting space or causing page faults, lets just use the space
to store one more Ruby object. Using the space to store one more Ruby
object will prevent page faults, stop wasting space, decrease memory
usage, decrease GC time, etc.
1. https://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf
2. 33390d15e7
3. 289a28e68f/gc.c (L1757-L1763)
4. https://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf page 4
Co-authored-by: John Hawthorn <john@hawthorn.email>