Commit graph

2139 commits

Author SHA1 Message Date
Peter Zhu 9770bf23b7 Fix malloc_increase not being correctly calculated
Commit 123eeb1c1a added incremental GC
which moved the resetting of malloc_increase and oldmalloc_increase to
before marking. However, during_minor_gc is not set until gc_marks_start,
so the value will be from the last GC run rather than the current one.
Before the incremental GC commit, this code was in gc_before_sweep
which ran before sweeping (after marking), so the value of
during_minor_gc was correct.
2021-09-20 09:01:58 -04:00
Peter Zhu db51bcada4 Fix total_freed_objects for invalidated pages
When the object is moved back into the T_MOVED slot, the flags of the
T_MOVED object are not copied, so the FL_FROM_FREELIST flag is lost. This causes
total_freed_objects to always be incremented.
2021-09-15 09:59:37 -04:00
Peter Zhu a65ac2d6fa Don't overwrite free_slots count during sweeping
gc_compact_finish may invalidate pages, which may move objects from this
page to other pages, which updates the free_slots of this page.
2021-09-15 09:00:42 -04:00
Peter Zhu e624d0d202 Update the free_slots count of the original page
When invalidating a page during compaction, the free_slots count should
be updated for the page of the object and not the page of the forwarding
address (since the object gets moved back to the forwarding address).
2021-09-15 09:00:42 -04:00
S-H-GAMELINKS 032534dbdf Using RB_BIGNUM_TYPE_P macro 2021-09-11 09:13:24 +09:00
卜部昌平 64f271241d suppress GCC's -Wnonnull-compare
This particular NULL check must be a good thing to do both statically
and dynamically.
2021-09-10 20:00:06 +09:00
S-H-GAMELINKS bdd6d8746f Replace RBOOL macro 2021-09-05 23:01:27 +09:00
Peter Zhu 0aa82b592f Remove heap_is_swept_object function
is_swept_object just calls heap_is_swept_object so remove
heap_is_swept_object.
2021-09-01 13:42:22 -04:00
Peter Zhu ed31bdfeee Fix memory leak in Variable Width Allocation
Force recycled objects could create a freelist for the page. At the
start of sweeping we should append to the freelist to avoid
permanently losing the slots on the freelist.
2021-08-27 10:13:32 -04:00
Peter Zhu 62bc4a9420 [Feature #18045] Implement size classes for GC
This commit implements size classes in the GC for the Variable Width
Allocation feature. Unless the `USE_RVARGC` compile flag is set, only a
single size class is created, maintaining current behaviour. See the
redmine ticket for more details.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-08-25 09:28:21 -04:00
Peter Zhu c08d4067be [Feature #18045] Remove T_PAYLOAD
This commit removes T_PAYLOAD since the new VWA implementation no longer
requires T_PAYLOAD types.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-08-25 09:28:21 -04:00
Peter Zhu 6648b411f7 Replace intptr_t with uintptr_t in gc.c
Pointers may be so large that intptr_t becomes negative, which is
problematic when comparing pointers.
2021-08-23 14:57:52 -04:00
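
A standalone sketch of the failure mode described above (the addresses are made up): casting a high pointer value to intptr_t yields a negative number, so signed comparisons order pointers incorrectly, while uintptr_t keeps the true ordering.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical addresses; the second has the top bit set. */
    uintptr_t lo = (uintptr_t)0x00007f0000000000ULL;
    uintptr_t hi = (uintptr_t)0xffff800000000000ULL;

    /* Signed view: hi wraps to a negative value, so hi < lo. */
    printf("as intptr_t:  hi < lo? %d\n", (intptr_t)hi < (intptr_t)lo);
    /* Unsigned view keeps the real ordering: lo < hi. */
    printf("as uintptr_t: lo < hi? %d\n", lo < hi);
    return 0;
}
```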
Peter Zhu eddd369e73 Revert "[Feature #18045] Implement size classes for GC"
This reverts commits 48ff7a9f3e
and b2e2cf2ded because they are causing
crashes on SPARC Solaris and i386 Debian.
2021-08-23 10:54:53 -04:00
Peter Zhu b2e2cf2ded [Feature #18045] Implement size classes for GC
This commit implements size classes in the GC for the Variable Width
Allocation feature. Unless the `USE_RVARGC` compile flag is set, only a
single size class is created, maintaining current behaviour. See the
redmine ticket for more details.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-08-23 09:15:42 -04:00
Peter Zhu 48ff7a9f3e [Feature #18045] Remove T_PAYLOAD
This commit removes T_PAYLOAD since the new VWA implementation no longer
requires T_PAYLOAD types.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-08-23 09:15:42 -04:00
Nobuyoshi Nakada 4c93c124c2
Turned the reminder comment into a compile-time message 2021-08-20 15:18:13 +09:00
Mike Dalessio e8e3b7a0e2 Undefine the alloc function for T_DATA classes
which have not undefined or redefined it.

When a `T_DATA` object is created whose class has not undefined or
redefined the alloc function, the alloc function now gets undefined by
Data_Wrap_Struct et al. Optionally, a future release may also warn
that this is being done.

This should help developers of C extensions to meet the requirements
explained in "doc/extension.rdoc". Without a check like this, there is
no easy way for an author of a C extension to see where they have made
a mistake.
2021-08-20 08:30:06 +09:00
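
A hedged sketch of what the commit above asks of extension authors (the Counter class and struct are hypothetical): a T_DATA class whose instances come only from Data_Wrap_Struct explicitly undefines its allocator.

```c
#include <ruby.h>

struct counter { long n; };

static void counter_free(void *p) { xfree(p); }

static VALUE counter_new(VALUE klass)
{
    struct counter *c = ALLOC(struct counter);
    c->n = 0;
    return Data_Wrap_Struct(klass, NULL, counter_free, c);
}

void Init_counter(void)
{
    VALUE cCounter = rb_define_class("Counter", rb_cObject);
    /* Undefine the default allocator, as doc/extension.rdoc requires for
     * T_DATA classes; with the commit above, Data_Wrap_Struct does this
     * automatically (and a future release may warn) if the class has not
     * undefined or redefined it itself. */
    rb_undef_alloc_func(cCounter);
    rb_define_singleton_method(cCounter, "new", counter_new, 0);
}
```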
Nobuyoshi Nakada ee7bd7d732
`SIZE_MAX` is not `size_t` on emscripten 2021-08-16 15:36:37 +09:00
Peter Zhu 79cc566ab4 Make during_compacting flag in GC one bit
Commit c32218de1b turned the during_compacting
flag into 2 bits to support the case when there is no write barrier. But
commit 32b7dcfb56 changed compaction to
always enable the write barrier. This commit cleans up some of the
leftover code.
2021-08-11 09:26:19 -04:00
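
A schematic illustration of what "one bit" means here (field names are illustrative, not the actual gc.c layout): the GC's mode flags live in a packed bitfield, and during_compacting goes back to occupying a single bit.

```c
#include <stdio.h>

/* Illustrative only: gc.c packs its mode flags in a similar way. */
struct gc_mode_flags {
    unsigned int mode              : 2;
    unsigned int immediate_sweep   : 1;
    unsigned int dont_gc           : 1;
    unsigned int during_compacting : 1; /* back down from 2 bits */
};

int main(void)
{
    struct gc_mode_flags f = {0};
    f.during_compacting = 1;
    printf("during_compacting=%u sizeof=%zu\n", f.during_compacting, sizeof f);
    return 0;
}
```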
Nobuyoshi Nakada 587f501c7c
Make bit flags `reason` unsigned 2021-08-08 17:13:33 +09:00
Nobuyoshi Nakada f81964568f
Suppress warnings when GC_ENABLE_INCREMENTAL_MARK == 0 2021-08-08 15:13:49 +09:00
S.H 378e8cdad6
Using RBOOL macro 2021-08-02 12:06:44 +09:00
Jeremy Evans 87b327efe6 Do not check pending interrupts when running finalizers
This fixes cases where exceptions raised using Thread#raise are
swallowed by finalizers and not delivered to the running thread.

This could cause issues with finalizers that rely on pending interrupts,
but that case is expected to be rarer.

Fixes [Bug #13876]
Fixes [Bug #15507]

Co-authored-by: Koichi Sasada <ko1@atdot.net>
2021-07-29 09:44:11 -07:00
Nobuyoshi Nakada 377995035a Suppress exception message in finalizer [Feature #17798] 2021-07-23 12:01:15 +09:00
Nobuyoshi Nakada fc4dd45d01 Show exception in finalizer [Feature #17798] 2021-07-23 12:01:15 +09:00
Nobuyoshi Nakada 63e5f4df38 Access rb_execution_context_t::errinfo directly 2021-07-23 12:01:15 +09:00
Nobuyoshi Nakada b726c4ee38 Use rb_equal
It can be optimized and handles Qnil properly.
2021-07-23 10:25:37 +09:00
Nobuyoshi Nakada 4da07ac2f3 Finalizers no longer store the safe level 2021-07-23 10:25:37 +09:00
Peter Zhu 62661dd9e4 Don't recompute the heap page
We have already calculated the page of the zombie. Don't recalculate
the page.
2021-07-22 10:10:23 -04:00
Peter Zhu 018f3961ae Don't set flags in finalize_list
The call after it to `heap_page_add_freeobj` will set the flags.
2021-07-22 10:10:23 -04:00
Peter Zhu 31144fe987 Change GC verification to walk all pages
`gc_verify_internal_consistency_` does not walk pages in the tomb heap
so numbers were off. This commit changes it to walk all allocated pages.
2021-07-21 14:40:44 -04:00
Peter Zhu e5fe48646c [Bug #18014] Add assertion to verify freelist
This commit adds an assertion after `gc_page_sweep` to
verify that the freelist length is equal to the number of free slots in
the page.
2021-07-15 11:48:52 -04:00
Peter Zhu 4a627dbdfd [Bug #18014] Fix memory leak in GC when using Ractors
When a Ractor is removed, the freelist in the Ractor cache is not
returned to the GC, leaving the freelist permanently lost. This commit
recycles the freelist when the Ractor is destroyed, preventing a memory
leak from occurring.
2021-07-15 11:48:52 -04:00
Peter Zhu 119697f61e [Bug #18014] Fix rb_gc_force_recycle unmark before sweep
If we force recycle an object before the page is swept, we should clear
it in the mark bitmap. If we don't clear it in the bitmap, then during
sweeping we won't account for this free slot so the `free_slots` count
of the page will be incorrect.
2021-07-15 11:48:52 -04:00
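
A generic sketch of the bookkeeping the commits above rely on (names hypothetical; the real code uses gc.c's bitmap macros): a slot freed before sweeping must also have its mark bit cleared, or the sweeper's free_slots count misses it.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* One mark bit per slot; a set bit means "live, don't sweep". */
static void clear_mark_bit(uint64_t *bits, size_t slot)
{
    bits[slot / 64] &= ~((uint64_t)1 << (slot % 64));
}

int main(void)
{
    uint64_t mark_bits[8] = {0};
    mark_bits[0] = 1 << 5;  /* slot 5 is marked */

    /* Force-recycling slot 5 before sweep: unmark it so the sweeper
     * counts it as a free slot and can push it onto the freelist. */
    clear_mark_bit(mark_bits, 5);
    printf("slot 5 marked? %d\n", (int)(mark_bits[0] >> 5 & 1));
    return 0;
}
```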
Nobuyoshi Nakada cb3eb3d7d5
Get rid of conflict in ccan/list
Undefine LIST_HEAD from BSD-origin sys/queue.h.
2021-07-10 17:39:25 +09:00
Yusuke Endoh 1293042307 gc.c: use each_stack_location for emscripten
follow up of e4e416380d
2021-07-07 17:17:52 +09:00
Peter Zhu 4a3df35239 Use stride passed into os_obj_of_i 2021-06-30 16:12:03 -04:00
Peter Zhu 03dc664493 Fix crash on RGENGC_CHECK_MODE=4
When running btest there is a crash when compiled with
RGENGC_CHECK_MODE=4. The crash happens because `during_gc` is not
turned off before `gc_marks_check` is called, causing the marking to
happen on the main mark stack instead of the mark stack created in
`objspace_allrefs`.
2021-06-29 09:28:07 -04:00
eileencodes 4f77a54f07 Fix asan error when walking heap for T_PAYLOAD objects
Related to https://bugs.ruby-lang.org/issues/18001

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-06-22 14:34:08 -07:00
eileencodes b91b3bc771 Add a cache for class variables
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.

GitHub PR #4340.

This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow due to needing to travel all the
way up th ancestor tree before returning the cvar value. The deeper the
ancestor tree the slower cvar access will be.

The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows with the cache, this branch
is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase. I.e., if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache /  Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-06-18 10:02:44 -07:00
Peter Zhu c639b58823 Refactor heap_set_increment
heap_set_increment essentially only calls heap_allocatable_pages_set.
They only differ in behaviour when `additional_pages == 0`. However,
this is only possible because heap_extend_pages may return 0. This
commit also changes heap_extend_pages to always return at least 1.
2021-06-17 10:58:48 -04:00
Nobuyoshi Nakada e4f891ce8d
Adjust styles [ci skip]
* --braces-after-func-def-line
* --dont-cuddle-else
* --procnames-start-lines
* --space-after-for
* --space-after-if
* --space-after-while
2021-06-17 10:13:40 +09:00
Nobuyoshi Nakada 9ab6d39a66
Added parentheses to silence sizeof-array-div warnings
As well as 2366c68116.
2021-06-13 15:12:45 +09:00
Nobuyoshi Nakada 9ec6c83c97
Removed duplicate include 2021-06-13 15:12:45 +09:00
Peter Zhu 929cc615a7 Finish GC before calling gc_set_initial_pages
If we are in the middle of incremental sweeping when gc_set_initial_pages
is called, there is an assertion error. The following patch will artificially
produce the bug:

```
diff --git a/gc.c b/gc.c
index c3157dbe2c..d7282cf8f0 100644
--- a/gc.c
+++ b/gc.c
@@ -404,7 +404,7 @@ int ruby_rgengc_debug;
  * 5: show all references
  */
 #ifndef RGENGC_CHECK_MODE
-#define RGENGC_CHECK_MODE  0
+#define RGENGC_CHECK_MODE  1
 #endif
 // Note: using RUBY_ASSERT_WHEN() extend a macro in expr (info by nobu).
@@ -10821,6 +10821,10 @@ gc_set_initial_pages(void)
 void
 ruby_gc_set_params(void)
 {
+    for (int i = 0; i < 10000; i++) {
+        rb_ary_new();
+    }
+
     /* RUBY_GC_HEAP_FREE_SLOTS */
     if (get_envparam_size("RUBY_GC_HEAP_FREE_SLOTS", &gc_params.heap_free_slots, 0)) {
        /* ok */
```

The crash looks like:

```
Assertion Failed: ../gc.c:2038:heap_add_page:!(heap == heap_eden && heap->sweeping_page)
```
2021-06-10 10:59:32 -04:00
Peter Zhu 8a46b480a7 Refactor gc_marks_start_heap to only configure heap
Move the non-heap related configurations to gc_marks_start.
2021-06-09 14:16:39 -04:00
Peter Zhu 9f110ced57 Add multi-heap support to gc_marks_wb_unprotected_objects 2021-06-08 14:31:38 -04:00
Aaron Patterson 38c5f2737f
Support an arbitrary number of header bits (< BITS_BITLENGTH)
NUM_IN_PAGE(page->start) will sometimes return a 0 or a 1 depending on
how the alignment of the 40 byte slots works out.  This commit uses the
NUM_IN_PAGE function to shift the bitmap down on the first bitmap plane.
Iterating on the first bitmap plane is "special", but this commit allows
us to align object addresses on something besides 40 bytes, and also
eliminates the need to fill guard bits.
2021-06-03 13:56:53 -07:00
Aaron Patterson bc65cf1a92
use a bool instead of int 2021-06-02 14:13:34 -07:00
Peter Zhu ad734a8cc3 Allocate exact space for objspace_each_objects
We are only iterating over the eden heap so `heap_eden->total_pages`
contains the exact number of pages we need to allocate for.
`heap_allocated_pages` may contain pages in the tomb.
2021-06-02 15:49:32 -04:00
Aaron Patterson f9b9d1c580
Use the current object as the compaction index
Instead of keeping track of the current bit plane, keep track of the
actual slot when compacting.  This means we don't need to re-scan
objects inside the same bit plane when we continue with movement
2021-06-01 15:25:08 -07:00
Aaron Patterson 8fdb15fdd3 Fill out switch statement in push_mark_stack
When objects are popped from the mark stack, we check that the object is
the right type (otherwise an rb_bug happens).  The problem is that when
we pop a bad object from the stack, we have no idea what pushed the bad
object on the stack.

This change makes an error happen when a bad object is pushed on the
mark stack, that way we can track down the source of the bug.
2021-05-26 14:21:54 -07:00
Aaron Patterson fc832ffbfa Disable compaction on platforms that can't support it
Manual compaction also requires a read barrier, so we need to disable
even manual compaction on platforms that don't support mprotect.

[Bug #17871]
2021-05-25 17:37:21 -07:00
Aaron Patterson e4e416380d Revert any references that are on the machine stack after compacting
Since compaction can be concurrent, the machine stack is allowed to
change while compaction is happening.  When compaction finishes, there
may be references on the machine stack that need to be reverted so that
we can remove the read barrier.
2021-05-18 14:53:07 -07:00
Nobuyoshi Nakada adafa8105f
PAGE_SIZE is never used on msys/mingw 2021-05-16 18:27:47 +09:00
Nobuyoshi Nakada 7cf90f99f5
Refix PAGE_SIZE
* honor actually used headers
* include sys/user.h only when `PAGE_SIZE` is not defined
2021-05-14 09:33:20 +09:00
Nobuyoshi Nakada a168c47728
Make USE_MMAP_ALIGNED_ALLOC static const 2021-05-14 09:31:09 +09:00
Koichi Sasada 2420119f47 skip rb_bug for inconsistent zombies count
It seems to be a bug, but it will take more time to debug.
To stop CI failures, skip this rb_bug on
`RGENGC_CHECK_MODE=2` temporarily.
2021-05-13 18:19:28 +09:00
Aaron Patterson 07f055bb13
Revert "Filling cache values on cvar write"
This reverts commit 08de37f9fa.
This reverts commit e8ae922b62.
2021-05-11 13:31:00 -07:00
eileencodes e8ae922b62 Add a cache for class variables
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow due to needing to travel all the
way up th ancestor tree before returning the cvar value. The deeper the
ancestor tree the slower cvar access will be.

The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows with the cache, this branch
is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105ca45) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be0093ae) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase. I.e., if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache /  Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-05-11 12:04:27 -07:00
Nobuyoshi Nakada 0bbab1e515
Protoized old pre-ANSI K&R style declarations and definitions 2021-05-07 00:04:36 +09:00
Nobuyoshi Nakada 99644514db
Conditionally used functions 2021-05-06 23:53:26 +09:00
Matt Valentine-House 8bbd319806 Allow newobj_of0 and newobj_slowpath to allocate into multiple heap slots 2021-05-06 09:18:17 -04:00
Nobuyoshi Nakada f941dd5a9f
Reuse sysconf result 2021-05-06 12:10:58 +09:00
Nobuyoshi Nakada 0dd9ac7721
Revised PAGE_MAX_SIZE case 2021-05-06 11:39:58 +09:00
Peter Zhu b655a3fa5b Fall back to sysconf to determine page size during runtime
On some platforms the PAGE_SIZE macro does not exist so we can fall back
to `sysconf` to determine the page size at runtime.
2021-05-05 17:54:26 -04:00
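
A minimal sketch of the fallback described above: use the compile-time PAGE_SIZE macro when the platform provides one, otherwise ask sysconf at runtime.

```c
#include <stdio.h>
#include <unistd.h>

static long runtime_page_size(void)
{
#ifdef PAGE_SIZE
    return PAGE_SIZE;              /* compile-time constant, if any */
#else
    return sysconf(_SC_PAGESIZE);  /* runtime fallback */
#endif
}

int main(void)
{
    printf("OS page size: %ld\n", runtime_page_size());
    return 0;
}
```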
Peter Zhu 23a98237df Fix PAGE_SIZE macro detection in autoconf
The current fix for PAGE_SIZE macro detection in autoconf does not work
correctly. I see the following output when running configure on Linux:

```
checking PAGE_SIZE is defined... no
```

Linux has the PAGE_SIZE macro. This is happening because the macro exists in
sys/user.h and not in the malloc headers.
2021-05-05 15:31:30 -04:00
Nobuyoshi Nakada 1921500511
PAGE_SIZE is used only when mmap is available 2021-05-06 01:01:48 +09:00
Nobuyoshi Nakada 3d5b6ddff8
Fix compilation on M1 Mac
As PAGE_SIZE may not be a preprocessor constant, dispatch at
runtime in that case.
2021-05-05 23:54:36 +09:00
Benoit Daloze 0764d323d8 Fix -Wundef warnings for patterns `#if HAVE`
* See [Feature #17752]
* Using this to detect them:
  git grep -P 'if\s+HAVE' | grep -Pv 'HAVE_LONG_LONG|/ChangeLog|HAVE_TYPEOF'
2021-05-04 14:56:55 +02:00
Benoit Daloze 68d6bd0873 Fix trivial -Wundef warnings
* See [Feature #17752]

Co-authored-by: xtkoba (Tee KOBAYASHI) <xtkoba+ruby@gmail.com>
2021-05-04 14:56:55 +02:00
Aaron Patterson 9a6226c61e Eagerly allocate instance variable tables along with object
This allows us to allocate the right size for the object in advance,
meaning that we don't have to pay the cost of ivar table extension
later.  The idea is that if an object type ever became "extended" at
some point, then it is very likely it will become extended again.  So we
may as well allocate the ivar table up front.
2021-05-03 14:11:48 -07:00
Yusuke Endoh e48109d86f Partially revert 2c7d3b3a72
to make imemo_ast WB-protected again. Only the test is kept.
2021-04-27 17:05:19 +09:00
Yusuke Endoh 2c7d3b3a72 node.c (rb_ast_new): imemo_ast is WB-unprotected
Previously imemo_ast was handled as WB-protected which caused a segfault
of the following code:

    # shareable_constant_value: literal
    M0 = {}
    M1 = {}
    ...
    M100000 = {}

My analysis is here: `shareable_constant_value: literal` creates many
Hash instances during parsing and adds them to the node_buffer of imemo_ast.
However, the contents are missed because imemo_ast is incorrectly
WB-protected.

This changeset makes imemo_ast WB-unprotected.
2021-04-26 22:46:51 +09:00
Ryuta Kamizono 33f2ff3bab Fix some typos by spell checker 2021-04-26 10:07:41 +09:00
Aaron Patterson 32643cfb1d check ep during compaction because it can be null
This commit adds a check on the ep just like in the mark function.  The
env can contain null bytes if allocation tracing is enabled.

We're seeing errors during autocompaction like this:

```
(lldb) bt 40
* thread #1, name = 'ruby', stop reason = signal SIGABRT
    frame #0: 0x00007f7d64b6018b libc.so.6`raise + 203
    frame #1: 0x00007f7d64b3f859 libc.so.6`abort + 299
    frame #2: 0x000055af5f2fefc9 ruby`die at error.c:764:5
    frame #3: 0x000055af5f2ff1ac ruby`rb_bug_for_fatal_signal(default_sighandler=0x0000000000000000, sig=11, ctx=0x000055af60bc3340, fmt="") at error.c:804:5
    frame #4: 0x000055af5f4bd08f ruby`sigsegv(sig=11, info=0x000055af60bc3470, ctx=0x000055af60bc3340) at signal.c:960:5
    frame #5: 0x00007f7d64ebe3c0 libpthread.so.0`__restore_rt
    frame #6: 0x000055af5f339b0a ruby`gc_ref_update_imemo(objspace=0x000055af60b2b040, obj=0x00007f7d5b513fd0) at gc.c:9046:13
    frame #7: 0x000055af5f339172 ruby`gc_update_object_references(objspace=0x000055af60b2b040, obj=0x00007f7d5b513fd0) at gc.c:9307:9
    frame #8: 0x000055af5f338e79 ruby`gc_ref_update(vstart=0x00007f7d5b510010, vend=0x00007f7d5b513ff8, stride=40, objspace=0x000055af60b2b040, page=0x000055af62577aa0) at gc.c:9452:21
    frame #9: 0x000055af5f337846 ruby`gc_update_references(objspace=0x000055af60b2b040, heap=0x000055af60b2b068) at gc.c:9481:9
    frame #10: 0x000055af5f336569 ruby`gc_compact_finish(objspace=0x000055af60b2b040, heap=0x000055af60b2b068) at gc.c:4840:5
    frame #11: 0x000055af5f335efb ruby`gc_page_sweep(objspace=0x000055af60b2b040, heap=0x000055af60b2b068, sweep_page=0x000055af63a1eb30) at gc.c:5046:13
    frame #12: 0x000055af5f3355c5 ruby`gc_sweep_step(objspace=0x000055af60b2b040, heap=0x000055af60b2b068) at gc.c:5214:19
    frame #13: 0x000055af5f33daf6 ruby`gc_sweep_rest(objspace=0x000055af60b2b040) at gc.c:5271:2
    frame #14: 0x000055af5f33cacd ruby`gc_sweep(objspace=0x000055af60b2b040) at gc.c:5389:2
    frame #15: 0x000055af5f33c21d ruby`gc_marks_rest(objspace=0x000055af60b2b040) at gc.c:7555:5
    frame #16: 0x000055af5f324d41 ruby`gc_rest(objspace=0x000055af60b2b040) at gc.c:8457:13
    frame #17: 0x000055af5f3297d8 ruby`garbage_collect(objspace=0x000055af60b2b040, reason=45568) at gc.c:8318:9
    frame #18: 0x000055af5f344ece ruby`garbage_collect_with_gvl(objspace=0x000055af60b2b040, reason=45568) at gc.c:8632:9
    frame #19: 0x000055af5f344e61 ruby`objspace_malloc_gc_stress(objspace=0x000055af60b2b040) at gc.c:10592:9
    frame #20: 0x000055af5f32ced1 ruby`objspace_xmalloc0(objspace=0x000055af60b2b040, size=64) at gc.c:10767:5
    frame #21: 0x000055af5f32ce11 ruby`ruby_xmalloc0(size=64) at gc.c:10988:12
    frame #22: 0x000055af5f32cdac ruby`ruby_xmalloc_body(size=64) at gc.c:10997:12
    frame #23: 0x000055af5f329415 ruby`ruby_xmalloc(size=64) at gc.c:12942:12
    frame #24: 0x00007f7d611c4fe5 objspace.so`newobj_i(tpval=0x00007f7d5b553770, data=0x000055af639031a0) at object_tracing.c:101:35
    frame #25: 0x000055af5f5b283f ruby`tp_call_trace(tpval=0x00007f7d5b553770, trace_arg=0x00007fff1016d398) at vm_trace.c:1115:2
    frame #26: 0x000055af5f5b50ec ruby`exec_hooks_body(ec=0x000055af60b2b700, list=0x000055af60b2b920, trace_arg=0x00007fff1016d398) at vm_trace.c:304:3
    frame #27: 0x000055af5f5b0f24 ruby`exec_hooks_unprotected(ec=0x000055af60b2b700, list=0x000055af60b2b920, trace_arg=0x00007fff1016d398) at vm_trace.c:333:5
    frame #28: 0x000055af5f5b0da8 ruby`rb_exec_event_hooks(trace_arg=0x00007fff1016d398, hooks=0x000055af60b2b920, pop_p=0) at vm_trace.c:378:13
    frame #29: 0x000055af5f33f8e2 ruby`rb_exec_event_hook_orig(ec=0x000055af60b2b700, hooks=0x000055af60b2b920, flag=1048576, self=0x00007f7d5b5c08c0, id=0, called_id=0, klass=0x0000000000000000, data=0x00007f7d5b513fd0, pop_p=0) at vm_core.h:1989:5
    frame #30: 0x000055af5f334975 ruby`gc_event_hook_body(ec=0x000055af60b2b700, objspace=0x000055af60b2b040, event=1048576, data=0x00007f7d5b513fd0) at gc.c:2083:5
  * frame #31: 0x000055af5f3342df ruby`newobj_slowpath_wb_protected [inlined] newobj_slowpath(klass=0x00007f7d5b9d19c8, flags=0x000000000000001a, objspace=0x000055af60b2b040, cr=0x000055af60b2b910, wb_protected=1) at gc.c:2284:9
    frame #32: 0x000055af5f33410f ruby`newobj_slowpath_wb_protected(klass=0x00007f7d5b9d19c8, flags=0x000000000000001a, objspace=0x000055af60b2b040, cr=0x000055af60b2b910) at gc.c:2299
    frame #33: 0x000055af5f333de9 ruby`newobj_of0(klass=0x00007f7d5b9d19c8, flags=0x000000000000001a, wb_protected=1, cr=0x000055af60b2b910) at gc.c:2338:11
    frame #34: 0x000055af5f3227ae ruby`newobj_of(klass=0x00007f7d5b9d19c8, flags=0x000000000000001a, v1=0x000055af657d88a0, v2=0x000055af657d8890, v3=0x0000000000000000, wb_protected=1) at gc.c:2348:17
    frame #35: 0x000055af5f322c5b ruby`rb_imemo_new(type=imemo_env, v1=0x000055af657d88a0, v2=0x000055af657d8890, v3=0x0000000000000000, v0=0x00007f7d5b9d19c8) at gc.c:2434:12
    frame #36: 0x000055af5f5a3925 ruby`vm_env_new(env_ep=0x000055af657d88a0, env_body=0x000055af657d8890, env_size=4, iseq=0x00007f7d5b9d19c8) at vm_core.h:1363:33
    frame #37: 0x000055af5f5a3808 ruby`vm_make_env_each(ec=0x000055af60b2b700, cfp=0x00007f7d6482fc90) at vm.c:801:11
    frame #38: 0x000055af5f5a368d ruby`vm_make_env_each(ec=0x000055af60b2b700, cfp=0x00007f7d6482fc20) at vm.c:752:13
    frame #39: 0x000055af5f5a368d ruby`vm_make_env_each(ec=0x000055af60b2b700, cfp=0x00007f7d6482fbb0) at vm.c:752:13
(lldb) f 31
frame #31: 0x000055af5f3342df ruby`newobj_slowpath_wb_protected [inlined] newobj_slowpath(klass=0x00007f7d5b9d19c8, flags=0x000000000000001a, objspace=0x000055af60b2b040, cr=0x000055af60b2b910, wb_protected=1) at gc.c:2284:9
   2281	        }
   2282	        GC_ASSERT(obj != 0);
   2283	        newobj_init(klass, flags, wb_protected, objspace, obj);
-> 2284	        gc_event_hook_prep(objspace, RUBY_INTERNAL_EVENT_NEWOBJ, obj, newobj_fill(obj, 0, 0, 0));
   2285	    }
   2286	    RB_VM_LOCK_LEAVE_CR_LEV(cr, &lev);
   2287
(lldb) p obj
(VALUE) $3 = 0x00007f7d5b513fd0
(lldb) f 6
frame #6: 0x000055af5f339b0a ruby`gc_ref_update_imemo(objspace=0x000055af60b2b040, obj=0x00007f7d5b513fd0) at gc.c:9046:13
   9043	        {
   9044	            rb_env_t *env = (rb_env_t *)obj;
   9045	            TYPED_UPDATE_IF_MOVED(objspace, rb_iseq_t *, env->iseq);
-> 9046	            UPDATE_IF_MOVED(objspace, env->ep[VM_ENV_DATA_INDEX_ENV]);
   9047	            gc_update_values(objspace, (long)env->env_size, (VALUE *)env->env);
   9048	        }
   9049	        break;
(lldb) p obj
(VALUE) $4 = 0x00007f7d5b513fd0
(lldb)
```
2021-04-20 09:54:09 -07:00
Peter Zhu f1f08f5b69 Remove useless attribute set in init_mark_stack
init_mark_stack already clears the mark stack so we do not need to
set the attribute cache_size to zero.
2021-04-15 10:10:23 -04:00
Peter Zhu 4eefb05725 Add RSymbol struct back into RVALUE
Commit 0ca714fa1a removed RSymbol from
RVALUE. This commit adds RSymbol back into RVALUE.
2021-04-13 09:37:50 -04:00
Nobuyoshi Nakada 9513fcd5bc
Suppress a warning
Loop variables of `list_for_each` need to be initialized.
2021-04-01 22:54:42 +09:00
Koichi Sasada 1fac99afda skip marking for uninitialized imemo_env.
RUBY_INTERNAL_EVENT_NEWOBJ can expose uninitialized imemo_env
objects, and marking them will cause a critical error. This patch
skips marking for uninitialized imemo_env.

See: http://rubyci.s3.amazonaws.com/centos7/ruby-master/log/20210329T183003Z.fail.html.gz

Shortest repro-code is provided by mame-san.
2021-03-31 19:18:32 +09:00
Peter Zhu b25361f731 Change heap walking to be safe for object allocation 2021-03-24 14:31:10 -04:00
Aaron Patterson 417c648f08 Free iv index table
IV index tables weren't being freed.  This program would leak memory:

```ruby
loop do
  k = Class.new do
    def initialize
      @a = 1
      @b = 1
      @c = 1
      @d = 1
      @e = 1
      @f = 1
      @g = 1
    end
  end
  k.new
end
```

This commit fixes the leak.
2021-03-23 13:10:25 -07:00
S.H 71ba09632b
Remove unneeded declarations 2021-03-20 21:00:29 +09:00
Yusuke Endoh c576e63ee7 gc.c: Use dedicated APIs for conservative GC in Emscripten
Emscripten provides "emscripten_scan_stack" to get the beginning and end
pointers of the stack for conservative GC.
Also, "emscripten_scan_registers" allows the GC to mark local variables
in WASM.
2021-03-19 12:35:48 +09:00
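
A hedged sketch based on the function names above; the exact headers and signatures are assumptions, not verified against an Emscripten SDK.

```c
#include <emscripten/stack.h>   /* emscripten_scan_stack (assumed)     */
#include <emscripten.h>         /* emscripten_scan_registers (assumed) */

static void scan_range(void *begin, void *end)
{
    /* A conservative GC would treat every aligned word in
     * [begin, end) as a potential object pointer and mark it. */
}

void mark_machine_context(void)
{
    emscripten_scan_stack(scan_range);
    emscripten_scan_registers(scan_range); /* requires ASYNCIFY */
}
```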
Nobuyoshi Nakada 90c12defb3
Constified variables for getenv 2021-03-12 17:13:53 +09:00
Peter Zhu 0bd1bc559f Don't use mmap on platforms that have large OS page sizes 2021-03-02 10:04:49 -08:00
Peter Zhu 6d834371c0 Fix typo 2021-03-02 10:04:49 -08:00
Peter Zhu 1c0e79e87b Disable auto compaction on platforms that do not support it 2021-02-25 11:01:50 -08:00
Peter Zhu 1e13548953 Use mmap for allocating heap pages 2021-02-25 11:01:50 -08:00
Aaron Patterson 08d5db4064
Reverting PR #4221
It seems this breaks tests on Solaris, so I'm reverting it until we
figure out the right fix.

  http://rubyci.s3.amazonaws.com/solaris11-sunc/ruby-master/log/20210224T210007Z.fail.html.gz
2021-02-24 13:44:10 -08:00
Peter Zhu a80366c922 Disable auto compaction on platforms that do not support it 2021-02-24 12:25:30 -08:00
Peter Zhu 785f5eb8f0 Use mmap for allocating heap pages 2021-02-24 12:25:30 -08:00
Koichi Sasada d260cbe295 show more information about imemo_ment
rb_obj_info(obj) (rp(obj)) doesn't show enough information for
non-iseq methods, so this patch shows more.
2021-02-19 16:54:31 +09:00
Koichi Sasada 07ab172ebe sync check_rvalue_consistency_force()
check_rvalue_consistency_force() uses is_pointer_to_heap() and
it should be synchronized with other ractors.
[Bug #17636]
2021-02-18 17:04:59 +09:00
Koichi Sasada 100e464bee clear RVALUE on NEWOBJ event.
NEWOBJ event is called without clearing RVALUE values (v1, v2, v3).
This patch clears them before the NEWOBJ tracepoint internal hook.
[Bug #17599]
2021-02-18 17:04:23 +09:00
Koichi Sasada 969b824a0c sync GC rest if needed
marking requires a barrier (stop all Ractors) and gc_enter() does it.
However, it doesn't check the rest event, which can start marking.
[Bug #17599]
2021-02-18 16:40:59 +09:00
Nobuyoshi Nakada 42a16e5974
Removed no-longer used variable 2021-02-17 20:15:05 +09:00
Peter Zhu 33b8bd97a8 Remove unreachable if statement in gc_page_sweep
This if statement is not reachable because `was_compacting` cannot be true when `heap->compact_cursor` is NULL.
2021-02-16 15:07:59 -08:00
Aaron Patterson 75b96c3a05 Don't register non-heap allocated objects
`rb_define_const` can add objects as "mark objects".  This is to make
code like this work:

  33d6e92e0c/ext/etc/etc.c (L1201)

```
    rb_define_const(rb_cStruct, "Passwd", sPasswd); /* deprecated name */
```

sPasswd is a heap allocated object that is also a C global, so we can't
move it (it needs to be pinned).  However, we have many calls to
`rb_define_const` that just pass in an integer like this:

```
rb_define_const(rb_cDBM, "WRITER",  INT2FIX(O_RDWR|RUBY_DBM_RW_BIT));
```

Non heap allocated objects like integers will never move, so there is no
reason to waste time in the GC marking / pinning them.
2021-02-04 09:49:00 -08:00
Matt Valentine-House e3ef21c307 Use RCLASS_EXT macro instead of directly accessing ptr 2021-02-01 08:42:54 -08:00
Matt Valentine-House e0f999a2ed Add RCLASS_SUBCLASSES Macro 2021-02-01 08:42:54 -08:00
Nobuyoshi Nakada e44870c225
Removed static assertion about size of `RVALUE`
It cannot hold where unaligned word access is disallowed and
`double` is wider than pointers.
2021-01-31 17:45:35 +09:00
Nobuyoshi Nakada ae0a179c4b
Narrowed down the condition to pack RValue
Because of `double` in `RFloat`, `RValue` would be packed by
`sizeof(double)` by default, on platforms where `double` is wider
than `VALUE`.  The size of `RValue` is a multiple of 5 now.
2021-01-31 13:20:15 +09:00
Peter Zhu d2ffd269a7 [Fixes #17538] Fix assertion failure when rincgc is turned off
Co-Authored-By: Matt Valentine-House <31869+eightbitraptor@users.noreply.github.com>
2021-01-27 16:17:46 -08:00
Matt Valentine-House 8a3f816675 Re-enable RGENGC_DEBUG for platforms with HAVE_VA_ARGS_MACRO
after this commit turned it off globally.

888cf28a7e

Co-authored-by: peterzhu2118 <peter@peterzhu.ca>
2021-01-26 08:18:44 -08:00
Matt Valentine-House 479e4d13cb Fix RGENGC_CHECK_MODE >= 4
[A previous commit](b59077eecf) removed some macro definitions that are used when RGENGC_CHECK_MODE >= 4 because they were using data stored against objspace, which is not ractor-safe.

This commit reinstates those macro definitions, using the current ractor

Co-authored-by: peterzhu2118 <peter@peterzhu.ca>
2021-01-26 08:17:58 -08:00
Yusuke Endoh 47d6c55755 gc.c: stop overflow check on emscripten build 2021-01-23 10:11:50 +09:00
Koichi Sasada e586345b77 check is_incremental_marking() again
is_incremental_marking() can be changed after checking the
flag without locking, especially on `GC.stress = true`.
2021-01-22 18:15:57 +09:00
Aaron Patterson 32b7dcfb56
Fix more assumptions about the read barrier
This is a continuation of 0130e17a41.  We
need to always use the read barrier
2021-01-21 11:19:44 -08:00
Aaron Patterson 0130e17a41
Always enable read barrier even on GC.compact
Some objects can survive the GC before compaction, but get collected in
the second compaction.  This means we could have objects reference
T_MOVED during "free" in the second, compacting GC.  If that is the
case, we need to invalidate those "moved" addresses.  Invalidation is
done via read barrier, so we need to make sure the read barrier is
active even during `GC.compact`.

This also means we don't actually need to do one GC before compaction,
we can just do the compaction and GC in one step.
2021-01-21 09:55:18 -08:00
Aaron Patterson 589a8026f0 fix ASAN errors 2021-01-13 14:53:45 -08:00
David CARLIER 161a20df28 gc fix typo for the timer instruction for ARM64. 2021-01-09 22:37:27 +09:00
Koichi Sasada 442bd0e92c show more info about imemo_callcache 2021-01-06 14:57:48 +09:00
Marcus Stollsteimer 3108ad7bf3 [DOC] Fix grammar: "is same as" -> "is the same as" 2021-01-05 15:13:53 +01:00
Koichi Sasada e7fc353f04 enable constant cache on ractors
constant cache `IC` is accessed in a non-atomic manner and there are
thread-safety issues, so Ruby 3.0 disables the const cache on
non-main ractors.

This patch enables it by introducing `imemo_constcache`, which is
allocated on every re-fill of the const cache like `imemo_callcache`.
[Bug #17510]

Now `IC` only has one entry `IC::entry` and it points to
`iseq_inline_constant_cache_entry`, managed by T_IMEMO object.

`IC` is an atomic data structure, so `rb_mjit_before_vm_ic_update()` and
`rb_mjit_after_vm_ic_update()` are not needed.
2021-01-05 02:27:58 +09:00
Takashi Kokubun ac2df89113
Stop managing valid class serials
`mjit_valid_class_serial_p` has not been used since b9007b6c54.
2020-12-29 23:01:11 -08:00
Nobuyoshi Nakada 09aca50fc4
Adjusted styles [ci skip] 2020-12-28 19:52:14 +09:00
Nobuyoshi Nakada 292230cbf9 Fixed leaked global symbols 2020-12-26 09:39:53 +09:00
Koichi Sasada 888cf28a7e define RGENGC_DEBUG_ENABLED() as 0
on RUBY_DEVEL==0 and !HAVE_VA_ARGS_MACRO.

gc_report() is always enabled on such configuration
(maybe it is a bug) so disable RGENGC_DEBUG_ENABLED().
2020-12-25 11:20:23 +09:00
Nobuyoshi Nakada 313d63c2ac
Use rb_init_identtable instead of direct use of rb_hashtype_ident 2020-12-23 18:06:27 +09:00
Koichi Sasada 02d9524cda separate rb_ractor_pub from rb_ractor_t
Separate some fields from rb_ractor_t into rb_ractor_pub, put it
at the beginning of rb_ractor_t, and declare it in vm_core.h so
vm_core.h can access rb_ractor_pub fields.

Now rb_ec_ractor_hooks() is a complete inline function and there is
no MJIT-related issue.
2020-12-22 00:03:00 +09:00
Koichi Sasada 74ab2c3b46 finalizing should be checked before VM lock 2020-12-18 17:59:26 +09:00
Nobuyoshi Nakada 7d32bf7853
Removed a moved local variable 2020-12-18 17:56:08 +09:00
Koichi Sasada 61236770e6 need to sync gc_finalize_deferred
gc_finalize_deferred() runs finalizers and it accesses objspace,
so it needs to sync.
2020-12-18 17:50:01 +09:00
Nobuyoshi Nakada 75416b8628 Removed old GC.stat keys deprecated since 2.2 2020-12-18 16:27:43 +09:00
Nobuyoshi Nakada 763d5f9c6b Removed old GC tuning environment variables deprecated since 2.1 2020-12-18 16:27:43 +09:00
Koichi Sasada cfa124ef05 acquire VM lock on gc_verify_internal_consistency()
There is a case where this function is called without acquiring the VM lock.
2020-12-18 14:16:06 +09:00
Koichi Sasada 29e42b8bfd add explicit check
To debug this issue:
https://rubyci.org/logs/rubyci.s3.amazonaws.com/solaris10-gcc/ruby-master/log/20201217T220004Z.fail.html.gz
2020-12-18 08:26:25 +09:00
Koichi Sasada 6538c89f1c gc_verify_internal_consistency() needs barrier
gc_verify_internal_consistency() accesses all slots (objects) so
all ractors should stop before starting this function.
2020-12-18 01:20:02 +09:00
Koichi Sasada da3438a504 sync obj_to_id_tbl
objspace->obj_to_id_tbl is a shared table so we need to synchronize
access to it.
2020-12-17 18:13:40 +09:00
Koichi Sasada 7f11c8086a reduce barrier counts for GC events
mark needs a barrier (stop other ractors), but other GC events don't need
barriers (maybe...).
2020-12-17 18:13:26 +09:00
Koichi Sasada 99b9145380 relax synchronization on WB
The current synchronization on write barriers is too heavy.
2020-12-17 17:37:52 +09:00
Koichi Sasada c42948d784 add debug counters for gc start events 2020-12-17 17:03:05 +09:00
Koichi Sasada 93ba3ac036 RGENGC_PROFILE=0
This flag was enabled, maybe accidentally.
2020-12-17 03:46:44 +09:00
Nobuyoshi Nakada 289d42c932
Removed unneeded cast and use the real name 2020-12-15 23:43:08 +09:00
Koichi Sasada 974e89ae07 revert da3bca513f
It seems to introduce critical problems. I have not found
the issue yet.

http://ci.rvm.jp/results/trunk-test@ruby-sky1/3286048
2020-12-11 09:54:34 +09:00
Koichi Sasada 72f1c43584 ObjectSpace._id2ref should not support unshareable
ObjectSpace._id2ref(id) can return any object even if it is
unshareable, so this patch raises RangeError if it runs in multi-ractor
mode and the found object is unshareable.
2020-12-10 18:27:44 +09:00
Nobuyoshi Nakada 142f154a0a
Unpoison freelist to chain 2020-12-10 18:16:22 +09:00
Koichi Sasada da3bca513f cache free pages per ractor
Per ractor method cache (GH-#3842) only cached 1 page and this patch
caches several pages to keep at least 512 free slots if available.
If you increase the number of cached free slots, all cached slots
will be collected when the GC is invoked.
2020-12-10 13:05:43 +09:00
Koichi Sasada 554c094977 set min/maximum free slots relative to ractor cnt
A program with multiple ractors can consume more objects per
unit time, so this patch sets minimum/maximum free_slots
relative to the ractor count (up to 8).
2020-12-10 13:05:43 +09:00
Koichi Sasada eafe000af3 lazy sweep tries to collect 2048 slots
Lazy sweep tries to collect free (unused) slots incrementally, and
it only collects a few pages. This patch makes lazy sweep collect
more objects (at least 2048 objects) so the GC overhead of multi-ractor
execution is reduced.
2020-12-10 13:05:43 +09:00
Koichi Sasada 45b29754cf need the lock for debug checking.
Checking code (RGENGC_CHECK_MODE > 0) needs a VM lock because it
refers to objspace.
2020-12-09 15:15:46 +09:00
Koichi Sasada 1ba05f5b2d need more lock in finalize_list()
Some data can be accessed in parallel, so it should be protected
by the lock.
2020-12-07 13:32:50 +09:00
Koichi Sasada 0ebf6bd0a2 RB_VM_LOCK_ENTER_NO_BARRIER
The write barrier requires the VM lock because it accesses the VM-global
bitmap, but RB_VM_LOCK_ENTER() can invoke GC, because another ractor can
be waiting to invoke GC and RB_VM_LOCK_ENTER() is a barrier point. This
means GC can run before the write barrier has protected the object.
To prevent this situation, RB_VM_LOCK_ENTER_NO_BARRIER() is introduced.
This lock primitive does not become a GC barrier point.
2020-12-07 11:27:25 +09:00
Koichi Sasada 8dd03e5cf0 skip assertion on multi-ractor
This assertion is not considered in multi-ractor mode.
2020-12-07 11:10:18 +09:00
Koichi Sasada 59ddb88da6 RB_EC_NEWOBJ_OF
NEWOBJ with current ec.
2020-12-07 08:28:36 +09:00
Koichi Sasada 91d99025e4 per-ractor object allocation
Now object allocation requires the VM global lock to synchronize objspace.
However, of course, this introduces huge overhead.
This patch caches some slots (in a page) per ractor and uses the cached
slots for object allocation. If there are no cached slots, it acquires the
global lock and gets new cached slots, or starts GC (marking or lazy sweeping).
2020-12-07 08:28:36 +09:00
Aaron Patterson a9d773a288
Revert "Skip repeated scan of object during compaction"
This seems to be breaking the build for some reason.

This command can reproduce it:

`make yes-test-all TESTS=--repeat-count=20`

This reverts commit 88bb1a672c.
2020-12-03 17:19:15 -08:00
Peter Zhu 88bb1a672c Skip repeated scan of object during compaction 2020-12-03 11:58:05 -08:00
Aaron Patterson 51268be7fe When allocating new pages, add them to the end of the linked list
When we allocate new pages, allocate them on the end of the linked list.
Then when we compact we can move things to the head of the list
2020-12-02 10:47:10 -08:00
Aaron Patterson 0bebea985d Incremental sweeping should not require page allocation
Incremental sweeping should sweep until we find a slot for objects to
use.  `heap_increment` was adding a page to the heap even though we
would sweep immediately after.

Co-authored-by: John Hawthorn <john@hawthorn.email>
2020-12-02 08:23:31 -08:00
Koichi Sasada d2cfb5228a show with sharing info 2020-12-01 11:10:19 +09:00
Koichi Sasada 67693d8d80 ractor local storage C-API
To manage ractor-local data for C extensions, the following APIs
are defined.

* rb_ractor_local_storage_value_newkey
* rb_ractor_local_storage_value
* rb_ractor_local_storage_value_set
* rb_ractor_local_storage_ptr_newkey
* rb_ractor_local_storage_ptr
* rb_ractor_local_storage_ptr_set

First, you need to create a storage key with
rb_ractor_local_(value|ptr)_newkey().
For ptr storage, it accepts the storage type, which describes
how to mark and how to free the data over the ractor's lifetime.

rb_ractor_local_storage_value/set are used to access a VALUE
and rb_ractor_local_storage_ptr/set are used to access a pointer.

random.c uses this API.
2020-12-01 09:39:30 +09:00
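
A hedged usage sketch of the VALUE-flavored API listed above (the header location and exact signatures are assumptions; random.c is the in-tree consumer to compare against):

```c
#include <ruby.h>
#include <ruby/ractor.h>  /* assumed location of the declarations */

static rb_ractor_local_key_t buffer_key;  /* key type name assumed */

/* Each ractor sees its own value under the same key. */
static VALUE get_buffer(void)
{
    /* Assumed to return Qnil before the first set in this ractor. */
    VALUE buf = rb_ractor_local_storage_value(buffer_key);
    if (NIL_P(buf)) {
        buf = rb_str_new_cstr("");
        rb_ractor_local_storage_value_set(buffer_key, buf);
    }
    return buf;
}

void Init_my_ext(void)
{
    buffer_key = rb_ractor_local_storage_value_newkey();
}
```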
Koichi Sasada 77936ad679 support SIGSEGV/BUS while read_barrier_handler()
SIGSEGV/BUS can occur inside read_barrier_handler(), so it should show
the errors.
2020-11-30 05:10:48 +09:00
Takashi Kokubun 69e77e81dc
Run rb_print_backtrace first on ruby_on_ci
Unfortunately we couldn't see a C backtrace with the previous commit
http://ci.rvm.jp/results/trunk-random2@phosphorus-docker/3272697.
2020-11-26 20:37:47 -08:00
Takashi Kokubun 4dbf6f1e51
Call rb_bug_without_die on CI
when GC.compact's SEGV handler is installed
2020-11-26 20:09:57 -08:00
Aaron Patterson c32218de1b
Disable auto compaction on platforms that can't support it
Both explicit compaction routines (gc_compact and the verify references form)
need to clear the heap before executing compaction.  Otherwise some
objects may not be alive, and we'll need the read barrier.  The heap
must only contain *live* objects if we want to disable the read barrier
during explicit compaction.

The previous commit was missing the "clear the heap" phase from the
"verify references" explicit compaction function.

Fixes [Bug #17306]
2020-11-25 11:29:14 -08:00
Aaron Patterson fed67fe6b2
Revert "Disable auto compaction on platforms that can't support it"
This reverts commit 63ad55cd88.

Revert "Disable read barrier on explicit compaction request"

This reverts commit 490b57783d.
2020-11-24 21:30:13 -08:00
Aaron Patterson 63ad55cd88
Disable auto compaction on platforms that can't support it
Auto Compaction uses mprotect to implement a read barrier.  mprotect can
only work on regions of memory that are a multiple of the OS page size.
Ruby's pages are a multiple of 4kb, but some platforms (like ppc64le)
don't have 4kb page sizes.  This commit disables the features on those
platforms.

Fixes [Bug #17306]
2020-11-24 14:48:19 -08:00
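
A standalone sketch of the constraint described above: mprotect only accepts page-aligned, page-multiple regions, so a read barrier over 4kb-sized Ruby pages cannot work when the OS page is larger.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long os_page = sysconf(_SC_PAGESIZE);
    /* mmap returns OS-page-aligned memory, so this region qualifies. */
    char *p = mmap(NULL, (size_t)os_page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    /* Protecting a page-aligned, page-multiple range succeeds (rc == 0).
     * A Ruby heap page smaller than the OS page (e.g. 64kb pages on
     * ppc64le) could not be protected independently of its neighbours,
     * which is why the feature is disabled there. */
    int rc = mprotect(p, (size_t)os_page, PROT_NONE);
    printf("OS page: %ld, mprotect rc: %d\n", os_page, rc);
    return munmap(p, (size_t)os_page);
}
```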
Aaron Patterson 87d21ee996
add HEAP_PAGE_SIZE to internal constants 2020-11-24 13:30:26 -08:00
Aaron Patterson 490b57783d Disable read barrier on explicit compaction request
We don't need a read barrier when the user calls `GC.compact` because we
don't allow allocations during GC, and all references should be "live"
2020-11-24 12:38:05 -08:00
Koichi Sasada 5e3259ea74 fix public interface
To make some kind of Ractor related extensions, some functions
should be exposed.

* include/ruby/thread_native.h
  * rb_native_mutex_*
  * rb_native_cond_*
* include/ruby/ractor.h
  * RB_OBJ_SHAREABLE_P(obj)
  * rb_ractor_shareable_p(obj)
  * rb_ractor_std*()
  * rb_cRactor

and rm ractor_pub.h
and rename srcdir/ractor.h to srcdir/ractor_core.h
    (to avoid conflict with include/ruby/ractor.h)
2020-11-18 03:52:41 +09:00
Aaron Patterson 6d17c9fa5d
gc_rest can change the total pages, so we need to do that first 2020-11-05 12:28:50 -08:00
Aaron Patterson d8da5c1983
add asserts to find crash 2020-11-05 12:27:09 -08:00
Aaron Patterson ab5f2fa4fb
Refactor verification method
Combine everything in to one C function
2020-11-05 11:13:04 -08:00
Aaron Patterson 68a3a2d90f
take VM lock when mutating the heap 2020-11-05 08:51:40 -08:00
Aaron Patterson a8581ce673 ensure T_OBJECT objects have internals initialized 2020-11-04 14:40:50 -08:00
Aaron Patterson 67b2c21c32
Add `GC.auto_compact= true/false` and `GC.auto_compact`
* `GC.auto_compact=`, `GC.auto_compact` can be used to control when
  compaction runs.  Setting `auto_compact=` to true will cause
  compaction to occurr duing major collections.  At the moment,
  compaction adds significant overhead to major collections, so please
  test first!

[Feature #17176]
2020-11-02 14:42:48 -08:00
Koichi Sasada db7a3b63ba Support Ractor.send(move: true) for more data
This patch allows moving more data types.
2020-11-02 01:37:28 +09:00
Aaron Patterson d8b0f1f7a8
Objects are born embedded, so we don't need to check ivptr
It's not necessary to check ivptr because objects are allocated as
"embedded" by default
2020-10-28 16:11:30 -07:00
Aaron Patterson a99f52d511
Remove another unnecessary test
Same as 5be42c1ef4
2020-10-28 10:16:57 -07:00
Aaron Patterson 5be42c1ef4
Remove unnecessary conditional
As of 0b81a484f3, `ROBJECT_IVPTR` will
always return a value, so we don't need to test whether or not we got
one.  T_OBJECTs always come to life as embedded objects, so they will
return an ivptr, and when they become "unembedded" they will have an
ivptr at that point too
2020-10-28 09:57:44 -07:00
Aaron Patterson 2c19c1484a
If an object isn't embedded it will have an ivptr
We don't need to check the existence of an ivptr because non-embedded
objects will always have one
2020-10-28 09:45:22 -07:00
Aaron Patterson abf678a439 Use a lock level for a less granular lock.
We are seeing an error where code that is generated with MJIT contains
references to objects that have been moved.  I believe this is due to a
race condition in the compaction function.

`gc_compact` has two steps:

1. Run a full GC to pin objects
2. Compact / update references

Step one is executed with `garbage_collect`.  `garbage_collect` calls
`gc_enter` / `gc_exit`, these functions acquire a JIT lock and release a
JIT lock.  So a lock is held for the duration of step 1.

Step two is executed by `gc_compact_after_gc`.  It also holds a JIT
lock.

I believe the problem is that the JIT is free to execute between step 1
and step 2.  It copies call cache values, but doesn't pin them when it
copies them.  So the compactor thinks it's OK to move the call cache
even though it is not safe.

We need to hold a lock for the duration of `garbage_collect` *and*
`gc_compact_after_gc`.  This patch introduces a lock level which
increments and decrements.  The compaction function can increment and
decrement the lock level and prevent MJIT from executing during both
steps.
2020-10-22 07:59:06 -07:00
Koichi Sasada b59077eecf Ractor-safe rb_objspace_reachable_objects_from
rb_objspace_reachable_objects_from(obj) is used to traverse all
reachable objects from obj. This function modifies objspace but it
is not ractor-safe (thread-safe). This patch fixes the problem.

Strategy:
(1) call GC mark process during_gc
(2) call Ractor-local custom mark func when !during_gc
2020-10-21 16:15:22 +09:00
Koichi Sasada ade411465d ObjectSpace.each_object with Ractors
Unshareable objects should not be touched from multiple ractors
so ObjectSpace.each_object should be restricted. On multi-ractor
mode, ObjectSpace.each_object only iterates shareable objects.
[Feature #17270]
2020-10-20 15:39:37 +09:00
Koichi Sasada f6661f5085 sync RClass::ext::iv_index_tbl
iv_index_tbl manages instance variable indexes (ID -> index).
This data structure should be synchronized with other ractors
so some VM locks are introduced.

This patch also introduces an atomic ivar cache used by the
set/getinlinecache instructions. To make the ivar cache (IVC) updatable,
we changed the iv_index_tbl data structure to manage (ID -> entry),
where an entry points to a serial and an index. IVC points to this entry
so that the cache update becomes atomic.
2020-10-17 08:18:04 +09:00
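
A schematic sketch of the (ID -> entry) shape described above (field names hypothetical): because the cache holds a pointer to the entry, a reader always sees the serial and index as one consistent unit.

```c
#include <stdint.h>

/* Hypothetical shape: the table maps ID -> entry pointer. */
struct iv_index_entry {
    uint64_t class_serial;  /* validates the cached entry */
    uint32_t index;         /* slot in the ivar array */
};

/* An inline cache stores the entry pointer; swapping in a new pointer
 * publishes serial and index together, so the update is atomic. */
struct ivar_cache {
    const struct iv_index_entry *entry;
};
```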
Koichi Sasada 0406898a3f add NULL check.
DATA_PTR(ractor) can be NULL just after creation.
2020-10-03 23:22:17 +09:00
Aaron Patterson d598654c74
Fix ASAN and don't check SPECIAL_CONST_P
Heap allocated objects are never special constants.  Since we're walking
the heap, we know none of these objects can be special.  Also, adding
the object to the freelist will poison the object, so we can't check
that the type is T_NONE after poison.
2020-09-28 09:45:04 -07:00
Aaron Patterson 664eeda66e
Fix ASAN errors when updating call cache
Invalidating call cache walks the heap, so we need to take care to
un-poison objects when examining them
2020-09-28 09:45:04 -07:00
Koichi Sasada 4a588e70b8 sync rb_gc_register_mark_object()
rb_vm_t::mark_object_ary is a global resource so we need to
synchronize access to it.
2020-09-24 17:09:12 +09:00
Aaron Patterson f3dddd77a9
Add a comment about why we're checking the finalizer table 2020-09-22 09:20:04 -07:00
Aaron Patterson 8b41e9b6e7
Revert "Pin values in the finalizer table"
If an object has a finalizer flag set on it, prevent it from moving.

This partially reverts commit 1a9dd31910.
2020-09-22 08:57:48 -07:00
Peter Zhu b6d599d76e Update heap_pages_himem after freeing pages 2020-09-20 23:13:47 +09:00
Nobuyoshi Nakada 702cebf104
strip trailing spaces [ci skip] 2020-09-19 17:40:54 +09:00
Aaron Patterson 1a9dd31910 Pin values in the finalizer table
When finalizers run (in `rb_objspace_call_finalizer`) the table is
copied to a linked list that is not managed by the GC.  If compaction
runs, the references in the linked list can go bad.

The finalizer table shouldn't be used frequently, so let's pin references in
the table so that the linked list in `rb_objspace_call_finalizer` is
safe.
2020-09-18 12:31:54 -07:00
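
For contrast, a sketch of the two marking flavors involved (the struct is hypothetical; the functions are public C API): rb_gc_mark() pins, which is what the table values get here, while rb_gc_mark_movable() lets compaction move the referent.

```c
#include <ruby.h>

struct holder {
    VALUE pinned_ref;   /* e.g. also reachable from a raw C list */
    VALUE movable_ref;  /* only reachable through this struct */
};

static void holder_mark(void *ptr)
{
    struct holder *h = ptr;
    rb_gc_mark(h->pinned_ref);          /* marked AND pinned */
    rb_gc_mark_movable(h->movable_ref); /* may move; the reference must
                                         * be updated during compaction */
}
```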
Koichi Sasada b189dc6926 rb_obj_info() shows more info for T_SYMBOL 2020-09-18 14:17:49 +09:00
Chris Seaton 8e173d8b27 Warn on a finalizer that captures the object to be finalized
Also improve specs and documentation for finalizers and more clearly
recommend a safe code pattern to use them.
2020-09-16 13:52:24 -07:00
Aaron Patterson 86087a1527 pointers on the stack need to be pinned 2020-09-15 09:09:25 -07:00
Samuel Williams a9b2a96c5c Fix incorrect initialization of `rb_io_t::self`. 2020-09-15 22:53:08 +12:00
Nobuyoshi Nakada d164eef957
Fixed heap-use-after-free on ractor 2020-09-04 15:17:42 +09:00
Alan Wu d4585e7470 Avoid potential for rb_raise() while crashing
rb_obj_raw_info is called while printing out crash messages and
sometimes called during garbage collection. Calling rb_raise() in these
situations is undesirable because it can start executing ensure blocks.
2020-09-03 17:41:58 -04:00
Koichi Sasada 79df14c04b Introduce Ractor mechanism for parallel execution
This commit introduces Ractor mechanism to run Ruby program in
parallel. See doc/ractor.md for more details about Ractor.
See ticket [Feature #17100] for the implementation details
and discussions.

[Feature #17100]

This commit does not complete the implementation. You may find
many bugs when using Ractor. Also, the specification may change,
so this feature is experimental. You will see a warning when
you create the first Ractor with `Ractor.new`.

I hope this feature can free programmers from thread-safety issues.
2020-09-03 21:11:06 +09:00
John Hawthorn 0b81a484f3 Initialize new T_OBJECT as ROBJECT_EMBED
Previously, when an object is first initialized, ROBJECT_EMBED isn't
set. This means that for brand new objects, ROBJECT_NUMIV(obj) is 0 and
ROBJECT_IV_INDEX_TBL(obj) is NULL.

Previously, this combination meant that the inline cache would never be
initialized when setting an ivar on an object for the first time since
iv_index_tbl was NULL, and if it were it would never be used because
ROBJECT_NUMIV was 0. Both cases always fell through to the generic
rb_ivar_set which would then set the ROBJECT_EMBED flag and initialize
the ivar array.

This commit changes rb_class_allocate_instance to set the ROBJECT_EMBED
flag on the object initially and to initialize all members of the
embedded array to Qundef. This allows the inline cache to be set
correctly on first use and to be used on future uses.

This moves rb_class_allocate_instance to gc.c, so that it has access to
newobj_of. This seems appropriate given that there are other allocating
methods in this file (ex. rb_data_object_wrap, rb_imemo_new).
2020-09-02 14:54:29 -07:00
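
A hedged sketch of the initialization the commit describes (simplified from rb_class_allocate_instance; ROBJECT_EMBED_LEN_MAX and the embedded ary member are real, but the surrounding code is illustrative):

```c
#include <ruby.h>

/* Illustrative: allocate a T_OBJECT already flagged as embedded and
 * fill every embedded ivar slot with Qundef, so the ivar inline cache
 * can be set correctly on the very first write. */
static VALUE allocate_embedded_instance(VALUE klass)
{
    VALUE obj = rb_newobj_of(klass, T_OBJECT | ROBJECT_EMBED);
    for (int i = 0; i < ROBJECT_EMBED_LEN_MAX; i++) {
        ROBJECT(obj)->as.ary[i] = Qundef;
    }
    return obj;
}
```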
Peter Zhu 11922b5e03 Fix error message for wb unprotected objects count
This error is about wb unprotected objects, not old objects.
2020-09-01 22:03:13 -04:00
Nobuyoshi Nakada 41cf17bef0
Fixed argument types 2020-09-02 01:41:20 +09:00
Nobuyoshi Nakada f6822e4ed0
Format with proper conversion specifiers instead of casts 2020-09-02 01:41:18 +09:00
Nobuyoshi Nakada 8d1de3154c
Use RSTRING_LENINT for overflow check 2020-09-01 19:03:41 +09:00
Peter Zhu 21ad4075a7
Don't read past the end of the Ruby string
Ruby strings don't always have a null terminator, so we can't use
one as a regular C string. By reading only the first len bytes of
the Ruby string, we won't read past the end of the Ruby string.
2020-09-01 19:01:32 +09:00
卜部昌平 cd1d6d9029 include/ruby/backward/2/r_cast.h: deprecate
Remove all usages of RCAST() so that the header file can be excluded
from ruby/ruby.h's dependency.
2020-08-27 15:03:36 +09:00