Commit graph

493 commits

Author SHA1 Message Date
John Hawthorn 733500e9d0
Lazily create singletons on instance_{exec,eval} (#5146)
* Lazily create singletons on instance_{exec,eval}

Previously when instance_exec or instance_eval was called on an object,
that object would be given a singleton class so that method
definitions inside the block would be added to the object rather than
its class.
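
For example (an illustrative snippet, not part of the original message):

```ruby
# A method defined inside instance_eval lands on the receiver's
# singleton class, not on its class.
obj = Object.new
obj.instance_eval do
  def greet
    "hello"
  end
end
obj.greet                       # => "hello"
Object.new.respond_to?(:greet)  # => false
```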

This commit aims to improve performance by delaying the creation of the
singleton class unless/until one is needed for method definition. Most
of the time instance_eval is used without any method definition.

This was implemented by adding a flag to the cref indicating that it
represents a singleton of the object rather than a class itself. In this
case CREF_CLASS returns the object's existing class, but when a method is
being defined (either via definemethod or VM_SPECIAL_OBJECT_CBASE, which
is used for undef and alias) the singleton class is created and used
instead.

This also happens to fix what I believe is a bug. Previously
instance_eval behaved differently with regards to constant access for
true/false/nil than for all other objects. I don't think this was
intentional.

    String::Foo = "foo"
    "".instance_eval("Foo")   # => "foo"
    Integer::Foo = "foo"
    123.instance_eval("Foo")  # => "foo"
    TrueClass::Foo = "foo"
    true.instance_eval("Foo") # NameError: uninitialized constant Foo

This also slightly changes the error message when trying to define a method
through instance_eval on an object which can't have a singleton class.

Before:

    $ ruby -e '123.instance_eval { def foo; end }'
    -e:1:in `block in <main>': no class/module to add method (TypeError)

After:

    $ ./ruby -e '123.instance_eval { def foo; end }'
    -e:1:in `block in <main>': can't define singleton (TypeError)

IMO this error is a small improvement on the original and better matches
the (both old and new) message when defining a method using `def self.`

    $ ruby -e '123.instance_eval{ def self.foo; end }'
    -e:1:in `block in <main>': can't define singleton (TypeError)

Co-authored-by: Matthew Draper <matthew@trebex.net>

* Remove "under" argument from yield_under

* Move CREF_SINGLETON_SET into vm_cref_new

* Simplify vm_get_const_base

* Fix leaf VM_SPECIAL_OBJECT_CONST_BASE

Co-authored-by: Matthew Draper <matthew@trebex.net>
2021-12-02 15:53:39 -08:00
Jeremy Evans b08dacfea3
Optimize dynamic string interpolation for symbol/true/false/nil/0-9
This provides a significant speedup for symbol, true, false, nil, 0-9,
and class/module, and a small speedup in most other cases.

Speedups (using included benchmarks):
:symbol        :: 60%
0-9            :: 50%
Class/Module   :: 50%
nil/true/false :: 20%
integer        :: 10%
[]             :: 10%
""             :: 3%

One reason this approach is faster is it reduces the number of
VM instructions for each interpolated value.
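
As a rough illustration (not from the commit), these are the shapes of
interpolation the new fast paths cover:

```ruby
# Each interpolated value below hits one of the optimized objtostring paths.
sym = :ok
"status: #{sym}"     # symbol
"flag: #{nil.nil?}"  # true/false
"digit: #{7}"        # 0-9 literal
"const: #{Integer}"  # class/module
```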

Initial idea, approach, and benchmarks from Eric Wong. I applied
the same approach against the master branch, updating it to handle
the significant internal changes since this was first proposed 4
years ago (such as CALL_INFO/CALL_CACHE -> CALL_DATA). I also
expanded it to optimize true/false/nil/0-9/class/module, and added
handling of missing methods, refined methods, and RUBY_DEBUG.

This renames the tostring insn to anytostring, and adds an
objtostring insn that implements the optimization. This requires
making a few functions non-static, and adding some non-static
functions.

This disables 4 YJIT tests.  Those tests should be reenabled after
YJIT optimizes the new objtostring insn.

Implements [Feature #13715]

Co-authored-by: Eric Wong <e@80x24.org>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Yusuke Endoh <mame@ruby-lang.org>
Co-authored-by: Koichi Sasada <ko1@atdot.net>
2021-11-18 15:10:20 -08:00
Eileen M. Uchitelle ea02b93bb9
Refactor setclassvariable (#5143)
We only need the cref when we have a cache miss so don't look it up until we
need it. This likely speeds up class variable writes in the interpreter but
also simplifies the jit code.
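
A minimal sketch of the kind of benchmark-ips script that could produce
the numbers below (assumed; the actual benchmark is not included in this
message):

```ruby
require "benchmark/ips"

class Foo
  def write_cvar
    @@cvar = 1  # class variable write, served by the inline cache after warmup
  end
end

foo = Foo.new
Benchmark.ips do |x|
  x.report("write a cvar") { foo.write_cvar }
end
```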

Before

```
Warming up --------------------------------------
        write a cvar   192.280k i/100ms
Calculating -------------------------------------
        write a cvar      1.915M (± 3.5%) i/s -      9.614M in   5.026694s
```

After

```
Warming up --------------------------------------
        write a cvar   216.308k i/100ms
Calculating -------------------------------------
        write a cvar      2.140M (± 3.1%) i/s -     10.815M in   5.058079s
```

Followup to ruby/ruby#5137
2021-11-18 16:17:40 -05:00
Eileen M. Uchitelle ec574ab345
Refactor getclassvariable (#5137)
* Refactor getclassvariable

We only need the cref when we have a cache miss so don't look it up until we
need it. This speeds up class variable reads in the interpreter but
also simplifies the jit code.

Benchmarks for master vs this branch (without yjit):

Before:

```
Warming up --------------------------------------
         read a cvar     1.276M i/100ms
Calculating -------------------------------------
         read a cvar     12.596M (± 1.7%) i/s -     63.781M in   5.064902s
```

After:

```
Warming up --------------------------------------
         read a cvar     1.336M i/100ms
Calculating -------------------------------------
         read a cvar     13.114M (± 3.6%) i/s -     65.488M in   5.000584s
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

* Clean up function signatures / remove dead code

rb_vm_getclassvariable signature has changed and we don't need
rb_vm_get_cref.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-11-18 12:11:53 -05:00
Aaron Patterson 57bf354c9a Eliminate some redundant checks on `num` in `newhash`
The `newhash` instruction was checking if `num` is greater than 0, but
so is [`rb_hash_new_with_size`](82e2443d8b/hash.c (L1564))
as well as [`rb_hash_bulk_insert`](82e2443d8b/hash.c (L4764)).

If we know the size is 0 in the instruction, we can just directly call
`rb_hash_new` and only check the size once.  Unfortunately, when num is
greater than 0, it's still checked 3 times.
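
As a rough illustration (an assumption, not from the commit), hash literals
compile to the `newhash` instruction, and the empty literal is the size-0
case that now calls `rb_hash_new` directly:

```ruby
# Empty literal compiles to `newhash 0`; a literal with dynamic values
# compiles to `newhash 2`.
puts RubyVM::InstructionSequence.compile("{}").disasm
puts RubyVM::InstructionSequence.compile("x = 1; {a: x}").disasm
```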
2021-10-18 17:41:38 +09:00
Jeremy Evans 1f5f8a187a
Make Array#min/max optimization respect refined methods
Pass in ec to vm_opt_newarray_{max,min}. Avoids having to
call GET_EC inside the functions, for better performance.

While here, add a test for Array#min/max being redefined to
test_optimization.rb.
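
A sketch of the behavior the fix covers (illustrative, not the actual
test code; `MinRefinement` is a made-up name):

```ruby
module MinRefinement
  refine Array do
    def min
      "refined min"
    end
  end
end

using MinRefinement

# The VM has an optimized instruction for [..].min / [..].max; with this
# fix the refinement is respected instead of the optimized built-in.
p [3, 1, 2].min  # => "refined min"
```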

Fixes [Bug #18180]
2021-09-30 15:18:14 -07:00
Alan Wu edb34e3563
Fix typo in insns.def [ci skip] 2021-09-23 17:14:04 -04:00
eileencodes b91b3bc771 Add a cache for class variables
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.

GitHub PR #4340.

This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow due to needing to travel all the
way up the ancestor tree before returning the cvar value. The deeper the
ancestor tree the slower cvar access will be.

The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows with the cache, this branch
is 6.5x faster when accessing class variables.
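
A minimal sketch of the shape of the vm_cvar benchmark (the real script
lives in the benchmark suite; names and module count here are assumptions):

```ruby
require "benchmark/ips"

MODULES = Array.new(26) { Module.new }

class CvarHolder
  MODULES.each { |m| include m }  # deepen the ancestor chain
  @@cvar = 1

  def self.read_cvar
    @@cvar
  end
end

Benchmark.ips do |x|
  x.report("vm_cvar") { CvarHolder.read_cvar }
end
```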

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase. I.e., if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache /  Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-06-18 10:02:44 -07:00
Eileen M. Uchitelle 2088a45798
[Bug #17880] Set leaf false on opt_setinlinecache (#4565)
This change fixes the bug described in https://bugs.ruby-lang.org/issues/17880.

Checking `ractor_shareable_p` will cause the method to call back into
Ruby. Anything calling this method can't be a leaf instruction,
otherwise it could crash. By adding `attr bool leaf = false` we no
longer crash because it marks the function as not a leaf.

Here's a simplified reproduction script:

```ruby
require "set"

class Id
  attr_reader :db_id
  def initialize(db_id)
    @db_id = db_id
  end

  def ==(other)
    other.class == self.class && other.db_id == db_id
  end
  alias_method :eql?, :==

  def hash
    10
  end

  def <=>(other)
    db_id <=> other.db_id if other.is_a?(self.class)
  end
end

class Namespace
  IDS = Set[
    Id.new(1).freeze,
    Id.new(2).freeze,
    Id.new(3).freeze,
    Id.new(4).freeze,
  ].freeze

  class << self
    def test?(id)
      IDS.include?(id)
    end
  end
end

p Namespace.test?(Id.new(1))
p Namespace.test?(Id.new(5))
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-06-14 17:34:57 -07:00
Aaron Patterson 07f055bb13
Revert "Filling cache values on cvar write"
This reverts commit 08de37f9fa.
This reverts commit e8ae922b62.
2021-05-11 13:31:00 -07:00
eileencodes 08de37f9fa Filling cache values on cvar write
Instead of on read. Once it's in the inline cache we never have to make
one again. We want to eventually put the value into the cache, and the
best opportunity to do that is when you write the value.
2021-05-11 12:04:27 -07:00
eileencodes e8ae922b62 Add a cache for class variables
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow due to needing to travel all the
way up the ancestor tree before returning the cvar value. The deeper the
ancestor tree the slower cvar access will be.

The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows with the cache, this branch
is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105ca45) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be0093ae) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase. I.e., if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache /  Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module: 12209958.3 i/s
          30 modules:  4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:  1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module: 16279458.0 i/s
         100 modules: 16087484.6 i/s - same-ish: difference falls within error
          30 modules: 15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
2021-05-11 12:04:27 -07:00
ebrohman ede2616990 Fix typo in insns.def
"redefine" -> "redefined"
2021-04-26 19:27:19 -04:00
Jeremy Evans 5512353d97 Remove reverse VM instruction
This was previously only used by the multiple assignment code, but
is no longer needed after the multiple assignment execution order
fix.
2021-04-21 16:29:26 -07:00
Aaron Patterson 8359821870 Use rb_fstring for "defined" strings.
We can take advantage of fstrings to de-duplicate the defined strings.
This means we don't need to keep the list of defined strings on the VM
(or register them as mark objects).
2021-03-17 10:55:37 -07:00
Aaron Patterson ea817c60fc Refactor vm_defined to return a boolean
We just need this function to return whether or not the thing we're
looking for is defined.  If it's defined, return something true,
otherwise false.
2021-03-17 10:55:37 -07:00
Aaron Patterson c3971bea33 Stop calling `rb_iseq_defined_string` in vm_defined
We already have access to the string from the iseqs, so we can stop
calling this function.
2021-03-17 10:55:37 -07:00
Aaron Patterson 17bf478de1 Store strings for `defined` in the iseqs
We know the string used for "defined" calls at compile time, so we can
store the string in the instruction sequences.
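
As an illustration (not from the commit), the strings in question are the
fixed results returned by `defined?`, which are known at compile time:

```ruby
x = 1
p defined?(x)       # => "local-variable"
p defined?(String)  # => "constant"
p defined?(puts)    # => "method"
p defined?(@ivar)   # => nil here; "instance-variable" once it is set
```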
2021-03-17 10:55:37 -07:00
Koichi Sasada e7fc353f04 enable constant cache on ractors
The constant cache `IC` is accessed in a non-atomic manner and has
thread-safety issues, so Ruby 3.0 disabled the const cache on
non-main Ractors.

This patch enables it by introducing `imemo_constcache`, which is
allocated on every re-fill of the const cache, like `imemo_callcache`.
[Bug #17510]

Now `IC` has only one entry, `IC::entry`, which points to an
`iseq_inline_constant_cache_entry` managed as a T_IMEMO object.

`IC` is now an atomic data structure, so `rb_mjit_before_vm_ic_update()`
and `rb_mjit_after_vm_ic_update()` are no longer needed.
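
A minimal sketch of what this enables (illustrative; the constant and the
work done inside the Ractor are made up):

```ruby
FOO = 42  # shareable value, readable from any Ractor

# Constant lookups inside a non-main Ractor can now use the inline
# constant cache instead of always taking the slow path.
r = Ractor.new { 1_000.times.sum { FOO } }
p r.take  # => 42000
```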
2021-01-05 02:27:58 +09:00
Takashi Kokubun 15e192070a
Fix a cyclic explanation 2020-12-25 19:17:16 -08:00
Koichi Sasada 9213771817 encourage inlining for vm_sendish()
Some tunings.
* add `inline` for vm_sendish()
* pass enum instead of func ptr to vm_sendish()
* reorder initial order of `calling` struct.
* add ALWAYS_INLINE for vm_search_method_fastpath()
* call vm_search_method_fastpath() from vm_sendish()
2020-12-17 16:53:41 +09:00
Takashi Kokubun 5d74894f2b
Lazily move PC with RUBY_VM_CHECK_INTS
```
$ benchmark-driver -v --rbenv 'before --jit;after --jit' --repeat-count=12 --alternate --output=all benchmark.yml
before --jit: ruby 3.0.0dev (2020-12-17T06:17:46Z master 3b4d698e0b) +JIT [x86_64-linux]
after --jit: ruby 3.0.0dev (2020-12-17T07:01:48Z master 843abb96f0) +JIT [x86_64-linux]
last_commit=Lazily move PC with RUBY_VM_CHECK_INTS
Calculating -------------------------------------
                                 before --jit           after --jit
Optcarrot Lan_Master.nes    80.29343646660429     83.15779723251525 fps
                            82.26755637885149     85.50197941326810
                            83.50682959728820     88.14657804306270
                            85.01236533133049     88.78201988978667
                            87.81799334561326     88.94841008936447
                            87.88228562393064     89.37925215601926
                            88.06695585889995     89.86143277214475
                            88.84730834922165     90.00773346420887
                            90.46317871213088     90.82603371104014
                            90.96308347148916     91.29797694822179
                            90.97945938504556     91.31086331868738
                            91.57127890154500     91.49949184318844
```
2020-12-16 23:06:28 -08:00
Takashi Kokubun 53babf35ef
Inline getconstant on JIT (#3906)
* Inline getconstant on JIT

* Support USE_MJIT=0
2020-12-16 06:24:07 -08:00
Koichi Sasada aa6287cd26 fix inline method cache sync bug
`cd` is passed from method call functions to method invocation
functions, but `cd` can be manipulated by other Ractors simultaneously,
so it has a thread-safety issue.

To solve this issue, this patch stores `ci` and the found `cc` in
`calling` and stops passing `cd`.
2020-12-15 13:29:30 +09:00
Takashi Kokubun 6b1d2de6cc
Unfortunately getinstancevariable was still not leaf
https://github.com/ruby/ruby/runs/1533401436
2020-12-10 14:40:29 -08:00
Jeremy Evans 699608487d Make getinstancevariable a leaf instruction
It can no longer issue a warning.
2020-12-10 10:16:05 -08:00
Koichi Sasada 344ec26a99 tuning trial: newobj with current ec
Passing the current ec can improve the performance of newobj. This patch
tries it for Array and String literals ([] and '').
2020-12-07 08:28:36 +09:00
Koichi Sasada f6661f5085 sync RClass::ext::iv_index_tbl
iv_index_tbl manages instance variable indexes (ID -> index).
This data structure should be synchronized with other ractors,
so this patch introduces some VM locks.

This patch also introduces an atomic ivar cache used by the
set/getinlinecache instructions. To make updating the ivar cache (IVC)
possible, we changed the iv_index_tbl data structure to manage
(ID -> entry), where each entry points to a serial and an index. The IVC
points to this entry so that cache updates become atomic.
2020-10-17 08:18:04 +09:00
Benoit Daloze 9b535f3ff7 Interpolated strings are no longer frozen with frozen-string-literal: true
* Remove freezestring instruction since this was the only usage for it.
* [Feature #17104]
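
A small illustration of the new behavior (assuming a file with the magic
comment):

```ruby
# frozen_string_literal: true

name = "world"
p "hello".frozen?          # => true  (plain literal, still frozen)
p "hello, #{name}".frozen? # => false (interpolated, no longer frozen)
```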
2020-09-15 21:32:35 +02:00
卜部昌平 f66e0212ef precalc invokebuiltin destinations
Noticed that struct rb_builtin_function is a purely compile-time
constant.  MJIT can eliminate some runtime calculations by statically
generating a dedicated C code generator for each builtin function.
2020-07-13 08:56:18 +09:00
Koichi Sasada a0f12a0258
Use ID instead of GENTRY for gvars. (#3278)
Use ID instead of GENTRY for gvars.

Global variables are compiled into GENTRY (a pointer to struct
rb_global_entry). This patch replaces GENTRY with ID and
makes the code simpler.

We now need to look up the GENTRY from the ID every time (st_lookup),
so some additional overhead is introduced.
However, the performance of global variable access is not
important nowadays, and this simplicity helps Ractor development.
2020-07-03 16:56:44 +09:00
Takashi Kokubun 3e02cd518f
Trace :return of builtin methods
using opt_invokebuiltin_delegate_leave insn.

Since Ruby 2.7, :return of methods using builtin has not been traced properly.
2020-06-23 23:42:38 -07:00
Takashi Kokubun e544a3a23c
Remove obsoleted opt_call_c_function insn (#3232)
* Remove obsoleted opt_call_c_function insn

* Keep opt_call_c_function with DEFINE_INSN_IF
2020-06-17 09:16:01 -07:00
卜部昌平 40ced763b4 vm_insnhelper.c: merge opt_eq_func / opt_eql_func
These two functions were almost identical, except in the case of
T_STRING/T_FLOAT. Why not merge them into one and let the difference be
handled by normal method calls (the slow path)?  This does not improve
runtime performance for me, but it at least reduces, for instance,
rb_eql_opt from 653 bytes to 86 bytes on my machine, according to nm(1).
2020-06-02 12:38:01 +09:00
Jeremy Evans 900e83b501 Turn class variable warnings into exceptions
This changes the following warnings:

* warning: class variable access from toplevel
* warning: class variable @foo of D is overtaken by C

into RuntimeErrors.  Handle defined?(@@foo) at toplevel
by returning nil instead of raising an exception (the previous
behavior warned before returning nil when defined? was used).
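
A sketch of the change in behavior (illustrative, not taken from the specs):

```ruby
begin
  @@foo = 1          # class variable access from toplevel
rescue RuntimeError => e
  puts e.message     # previously a warning, now an exception
end

p defined?(@@bar)    # => nil, returned without raising
```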

Refactor the specs to avoid the warnings even in older versions.
The specs were checking for the warnings, but the purpose of
the related specs as evidenced from their description is to
test for behavior, not for warnings.

Fixes [Bug #14541]
2020-04-10 00:29:05 -07:00
Koichi Sasada b9007b6c54 Introduce disposable call-cache.
This patch contains several ideas:

(1) Disposable inline method cache (IMC) for race-free inline method cache
    * Make the call-cache (CC) an RVALUE (a GC target object) and allocate a new
      CC on cache miss.
    * This technique allows race-free access from parallel processing
      elements like RCU.
(2) Introduce per-Class method cache (pCMC)
    * Instead of a fixed-size global method cache (GMC), pCMC allows a flexible
      cache size.
    * Caching CCs reduces CC allocation and allows sharing a CC's fast path
      between call sites with the same call-info (CI).
(3) Invalidate an inline method cache by invalidating corresponding method
    entries (MEs)
    * Instead of using class serials, we set an "invalidated" flag on the method
      entry itself to represent cache invalidation.
    * Compared with using class serials, the impact of method modification
      (add/overwrite/delete) is small.
    * Updating class serials invalidates all method caches of the class and
      its sub-classes.
    * The proposed approach invalidates the method cache of only one ME.

See [Feature #16614] for more details.
2020-02-22 09:58:59 +09:00
Koichi Sasada f2286925f0 VALUE size packed callinfo (ci).
Now, rb_call_info describes how to call the method with a tuple of
(mid, orig_argc, flags, kwarg). In most cases, kwarg == NULL and
mid+argc+flags only requires 64 bits. So this patch packs
rb_call_info into a VALUE (1 word) in such cases. If it cannot be
represented in a VALUE, then imemo_callinfo is used, which contains a
conventional callinfo (rb_callinfo, renamed from rb_call_info).

iseq->body->ci_kw_size is removed because every callinfo is now VALUE
size (a packed ci or a pointer to imemo_callinfo).

To access ci information, we need to use these functions:
vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci).

struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg.

rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc()
are temporarily removed because cd->ci should be marked.
2020-02-22 09:58:59 +09:00
Nobuyoshi Nakada bc8f28fbd0
Fixed a typo, missing "i" [ci skip] 2020-01-27 16:23:52 +09:00
Aaron Patterson 2c8d186c6e
Introduce an "Inline IVAR cache" struct
This commit introduces an "inline ivar cache" struct.  The reason we
need this is so compaction can differentiate between an ivar cache and a
regular inline cache.  Regular inline caches contain references to
`VALUE`s, while ivar caches just contain the ivar index.  With
this new struct we can easily update references for inline caches (but
not inline var caches, as they just contain an int).
2019-12-05 13:37:02 -08:00
Koichi Sasada 36da0b3da1 check interrupts at each frame pop timing.
Asynchronous events such as signal traps, finalization timing,
thread switching and so on are managed by "interrupt_flag".
Ruby's threads check this flag periodically, and if a thread
does not check this flag, the above events don't happen.

This checking is done by the CHECK_INTS() (and related) macros, which are
placed at certain points (the leave instruction and so on). However, at
the end of C methods, C blocks (IMEMO_IFUNC), etc. there is no checking,
which can result in an uninterruptible thread.

To fix this situation, we decided to place CHECK_INTS() at
vm_pop_frame(). This increases the number of interrupt checking points.
[Bug #16366]

This patch can introduce unexpected events...
2019-11-29 17:47:02 +09:00
Koichi Sasada f38b6d197f Revert "export for MJIT"
This reverts commit 2e6f1cf8b2.
2019-11-29 03:22:24 +09:00
Koichi Sasada 2e6f1cf8b2 export for MJIT 2019-11-29 03:17:52 +09:00
Koichi Sasada 5e34ab5406 add casts.
add casts to avoid compile error.
http://ci.rvm.jp/results/trunk_clang_39@silicon-docker/2402215
2019-11-18 10:36:48 +09:00
Koichi Sasada 71fee9bc72 vm_invoke_builtin_delegate with start index.
opt_invokebuiltin_delegate and opt_invokebuiltin_delegate_leave
invoke builtin functions with the same parameters as the method.
This technique eliminates stack push operations. However, the delegated
parameters had to be exactly the same as the given parameters
(e.g. `def foo(a, b, c) __builtin_foo(a, b, c)` is okay, but
__builtin_foo(b, c) is not allowed).

This patch relaxes this restriction. An ISeq has a local variables
table which includes the parameters. For example, for the method defined
as `def foo(a, b, c) x=y=nil`, the local variables table contains
[a, b, c, x, y]. If a builtin function is called with arguments which
are a sub-array of the lvar table, the opt_invokebuiltin_delegate
instruction is used with a start index. For example, `__builtin_foo(b, c)`,
`__builtin_bar(c, x, y)` and so on are okay.
2019-11-18 10:16:11 +09:00
Nobuyoshi Nakada fb6a489af2
Revert "Method reference operator"
This reverts commit 67c5747369.
[Feature #16275]
2019-11-12 17:24:48 +09:00
Koichi Sasada 43ceedecc0 use STACK_ADDR_FROM_TOP()
vm_invoke_builtin() accesses the VM stack via cfp->sp. However, MJIT
can use its own stack. To access it appropriately, we need to
use STACK_ADDR_FROM_TOP().
2019-11-09 16:18:58 +09:00
Koichi Sasada 46acd0075d support builtin features with Ruby and C.
Support loading builtin features written in Ruby, which are implemented
with C builtin functions.
[Feature #16254]

Several features:

(1) Load .rb file at boottime with native binary.

Currently, prelude.rb is loaded at boot time. However, this file is
embedded in the interpreter in text format and we need to compile it.
This patch adds a feature to load it from a binary format.

(2) __builtin_func() in Ruby calls func() written in C.

In a Ruby file, we can write `__builtin_func()` like a method call.
However, this is not a method call, but special syntax to call
a function `func()` written in C. The C functions should be defined
in a file (the same compile unit) which loads this .rb file.

Functions (`func` in above example) should be defined with
  (a) 1st parameter: rb_execution_context_t *ec
  (b) rest parameters (0 to 15).
  (c) VALUE return type.
These requirements are very similar to those for functions used by
rb_define_method(); however, `rb_execution_context_t *ec`
is a new requirement.

(3) automatic C code generation from .rb files.

tool/mk_builtin_loader.rb creates C code to load the .rb files
needed by the miniruby and ruby commands. This script is run by
BASERUBY, so *.rb files should be written in BASERUBY-compatible
syntax. The script loads a .rb file, finds all __builtin_-prefixed
method calls, and generates a part of the C code to export the
functions.

tool/mk_builtin_binary.rb creates C code which contains the
binary-compiled Ruby files needed by the ruby command.
2019-11-08 09:09:29 +09:00
Alan Wu 89e7997622 Combine call info and cache to speed up method invocation
To perform a regular method call, the VM needs two structs,
`rb_call_info` and `rb_call_cache`. At the moment, we allocate these two
structures in separate buffers. In the worst case, the CPU needs to read
4 cache lines to complete a method call. Putting the two structures
together reduces the maximum number of cache line reads to 2.

Combining the structures also saves 8 bytes per call site as the current
layout uses two separate pointers for the call info and the call cache.
This saves about 2 MiB on Discourse.

This change improves the Optcarrot benchmark at least 3%. For more
details, see attached bugs.ruby-lang.org ticket.

Complications:
 - A new instruction attribute `comptime_sp_inc` is introduced to
 calculate SP increase at compile time without using call caches. At
 compile time, a `TS_CALLDATA` operand points to a call info struct, but
 at runtime, the same operand points to a call data struct. Instructions
 that explicitly define `sp_inc` also need to define `comptime_sp_inc`.
 - MJIT code for copying call cache becomes slightly more complicated.
 - This changes the bytecode format, which might break existing tools.

[Misc #16258]
2019-10-24 18:03:42 +09:00
卜部昌平 eb92159d72 Revert https://github.com/ruby/ruby/pull/2486
This reverts commits: 10d6a3aca7 8ba48c1b85 fba8627dc1 dd883de5ba
6c6a25feca 167e6b48f1 7cb96d41a5 3207979278 595b3c4fdd 1521f7cf89
c11c5e69ac cf33608203 3632a812c0 f56506be0d 86427a3219 .

The reason for the revert is that we observed an ABA problem around
the inline method cache.  When a cache misses, we search for a
method entry.  And if the entry is identical to what was cached
before, we reuse the cache.  But the commits we are reverting here
introduced situations where a method entry is freed, and then the
identical memory region is used for another method entry.  An
inline method cache cannot detect that ABA.

Here is code that reproduces such a situation:

```ruby
require 'prime'

class << Integer
  alias org_sqrt sqrt
  def sqrt(n)
    raise
  end

  GC.stress = true
  Prime.each(7*37){} rescue nil # <- Here we populate CC
  class << Object.new; end

  # These adjacent remove-then-alias maneuver
  # frees a method entry, then immediately
  # reuses it for another.
  remove_method :sqrt
  alias sqrt org_sqrt
end

Prime.each(7*37).to_a # <- SEGV
```
2019-10-03 12:45:24 +09:00
卜部昌平 fba8627dc1 delete unnecessary branch
At last, not only myself but also your compiler are fully confident
that the method entries pointed to from call caches are immutable.  We
don't have to worry about silent updates.  Just delete the branch
that is now always false.

Calculating -------------------------------------
                           ours       trunk
vm2_poly_same_method     2.142M      2.070M i/s -      6.000M times in 2.801148s 2.898994s

Comparison:
             vm2_poly_same_method
                ours:   2141979.2 i/s
               trunk:   2069683.8 i/s - 1.03x  slower
2019-09-30 10:26:38 +09:00