Commit Graph

583 Commits

Author SHA1 Message Date
S-H-GAMELINKS bdd6d8746f Replace RBOOL macro 2021-09-05 23:01:27 +09:00
Jeremy Evans 2d98593bf5 Support tracing of attr_reader and attr_writer
In vm_call_method_each_type, check for c_call and c_return events before
dispatching to vm_call_ivar and vm_call_attrset.  With this approach, the
call cache will still dispatch directly to those functions, so this
change will only decrease performance for the first (uncached) call, and
even then, the performance decrease is very minimal.

This approach requires that we clear the call caches when tracing is
enabled or disabled.  The approach currently switches all vm_call_ivar
and vm_call_attrset call caches to vm_call_general any time tracing is
enabled or disabled. So it could theoretically result in a slowdown for
code that constantly enables or disables tracing.

This approach does not handle targeted tracepoints, but from my testing,
c_call and c_return events are not supported for targeted tracepoints,
so that shouldn't matter.

This includes a benchmark showing the performance decrease is minimal
if detectable at all.

Fixes [Bug #16383]
Fixes [Bug #10470]

Co-authored-by: Takashi Kokubun <takashikkbn@gmail.com>
2021-08-29 07:23:39 -07:00
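A minimal Ruby-level sketch of what the commit above enables (the class and attribute names here are hypothetical): :c_call/:c_return events should now fire for reads through attr_reader.

```
class Point                     # hypothetical example class
  attr_reader :x
  def initialize(x)
    @x = x
  end
end

events = []
tp = TracePoint.new(:c_call, :c_return) { |t| events << [t.event, t.method_id] }
tp.enable { Point.new(1).x }

# With this change the attr_reader shows up in the trace:
p events.select { |_, name| name == :x }
# expected: [[:c_call, :x], [:c_return, :x]]
```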
Nobuyoshi Nakada 4c3140d60f
Add keyrest to ruby2_keywords parameters [Bug #18011] 2021-08-03 10:56:50 +09:00
Jeremy Evans 64ac984129 Make RubyVM::AbstractSyntaxTree.of raise for method/proc created in eval
This changes Thread::Backtrace::Location#absolute_path to return
nil for methods/procs defined in eval.  If the realpath of an iseq
is nil, that indicates it was defined in eval, in which case you
cannot use RubyVM::AbstractSyntaxTree.of.

Fixes [Bug #16983]

Co-authored-by: Koichi Sasada <ko1@atdot.net>
2021-07-29 13:51:03 -07:00
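A small sketch of the resulting behavior, as described in the commit above: a proc created in eval has no real source file, so RubyVM::AbstractSyntaxTree.of refuses it instead of returning a node for the wrong source.

```
prc = eval("proc { 1 + 1 }")    # defined in eval, so no real source file

begin
  RubyVM::AbstractSyntaxTree.of(prc)
rescue => e
  puts "#{e.class}: #{e.message}"   # raises rather than returning a bogus node
end
```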
Yusuke Endoh 68e1dc5172 iseq.c: Make ast_line_count return 0 when syntax error occurred
This broke coverage CI

```
  1) Failure:
TestRequire#test_load_syntax_error [/home/runner/work/actions/actions/ruby/test/ruby/test_require.rb:228]:
Exception(SyntaxError) with message matches to /unexpected/.
[SyntaxError] exception expected, not #<TypeError: no implicit conversion of false into Integer>.
```
https://github.com/ruby/actions/runs/2914743968?check_suite_focus=true
2021-06-26 00:15:16 +09:00
Yusuke Endoh 0a36cab1b5 Enable USE_ISEQ_NODE_ID by default
... which was formerly called EXPERIMENTAL_ISEQ_NODE_ID.

See also ff69ef27b0.

https://bugs.ruby-lang.org/issues/17930
2021-06-18 03:35:38 +09:00
Yusuke Endoh dfba87cd62 Make it possible to get AST::Node from Thread::Backtrace::Location
RubyVM::AST.of(Thread::Backtrace::Location) returns a node that
corresponds to the location. Typically, the node is a method call, but
not always.

This change also includes iseq dump/load support of node_ids for each
instruction.
2021-06-18 03:35:38 +09:00
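A hedged sketch of the feature described above; it assumes node_id support is enabled and that the script runs from a real file (not eval or -e), so the node can be located for the backtrace location.

```
def current_node
  # Look up the AST node for the caller's location
  RubyVM::AbstractSyntaxTree.of(caller_locations(1, 1).first)
end

node = current_node   # typically the method-call node on this line
p node&.type
```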
Yusuke Endoh fb01411ae8 node.h: Reduce struct size to fit with Ruby object size (five VALUEs)
by merging `rb_ast_body_t#line_count` and `#script_lines`.

Fortunately `line_count == RARRAY_LEN(script_lines)` was always
satisfied. When script_lines is saved, it has an array of lines, and
when not saved, it has a Fixnum that represents the old line_count.
2021-06-18 02:34:27 +09:00
Takashi Kokubun 070caf54d2
Refactor rb_vm_insn_addr2insn calls
There were way too many ifdefs.
2021-06-02 01:16:50 -07:00
Yusuke Endoh ff69ef27b0 compile.c: Pass node instead of nd_line(node) to ADD_INSN* functions
... then, new_insn_core extracts nd_line(node).

Also, if a macro "EXPERIMENTAL_ISEQ_NODE_ID" is defined, this changeset
keeps nd_node_id(node) for each instruction. This is intended for
TypeProf to identify what AST::Node corresponds to each instruction.

This patch was originally authored by @yui-knk to show in which column a
NoMethodError occurred.

https://github.com/ruby/ruby/compare/master...yui-knk:feature/node_id

Co-Authored-By: Yuichiro Kaneko <yui-knk@ruby-lang.org>
2021-05-07 17:02:15 +09:00
Aaron Patterson 8359821870 Use rb_fstring for "defined" strings.
We can take advantage of fstrings to de-duplicate the defined strings.
This means we don't need to keep the list of defined strings on the VM
(or register them as mark objects).
2021-03-17 10:55:37 -07:00
Aaron Patterson 17bf478de1 Store strings for `defined` in the iseqs
We know the string used for "defined" calls at compile time, so we can
store the string in the instruction sequences.
2021-03-17 10:55:37 -07:00
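For context, the "defined" strings are the result strings of Ruby's defined? keyword; the two commits above store them in the iseq at compile time and deduplicate them via fstrings. A quick illustration of which strings are involved:

```
# The strings produced by defined? are known at compile time:
p defined?(puts)        # => "method"
p defined?(@ivar)       # => nil when unset, "instance-variable" otherwise
p defined?(String)      # => "constant"
p defined?(1 + 1)       # => "expression"
```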
Koichi Sasada 954d6c7432 remove invalidated cc
If a cc is invalidated, it should be released from the iseq.
2021-01-06 14:57:48 +09:00
Koichi Sasada e7fc353f04 enable constant cache on ractors
The constant cache `IC` was accessed in a non-atomic manner and had
thread-safety issues, so Ruby 3.0 disabled the const cache on
non-main Ractors.

This patch enables it by introducing `imemo_constcache` and allocating
one on every re-fill of the const cache, like `imemo_callcache`.
[Bug #17510]

Now `IC` only has one entry, `IC::entry`, and it points to an
`iseq_inline_constant_cache_entry`, managed by a T_IMEMO object.

`IC` is an atomic data structure, so `rb_mjit_before_vm_ic_update()` and
`rb_mjit_after_vm_ic_update()` are no longer needed.
2021-01-05 02:27:58 +09:00
Jeremy Evans 4a5c42db88 Make RubyVM::InstructionSequence.compile_file use same encoding as load
This switches the internal function from rb_parser_compile_file_path
to rb_parser_load_file, which is the same internal method that
Kernel#load uses.

Fixes [Bug #17308]
2020-11-19 07:12:50 +09:00
Koichi Sasada 084e7e31b2 Retain enabled, line-specified trace points
If two or more tracepoints are enabled with the same target but with
different target lines, only the last line is activated.
This patch fixes the issue by retaining the existing trace instructions.
[Bug #17302]
2020-11-17 07:33:38 +09:00
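A sketch of the fixed behavior (the method name is hypothetical): two trace points targeting the same method but different lines should now both fire.

```
def work                       # hypothetical target method
  a = 1
  b = 2
  a + b
end

m    = method(:work)
line = m.source_location[1]

tp1 = TracePoint.new(:line) { |t| puts "tp1 hit line #{t.lineno}" }
tp2 = TracePoint.new(:line) { |t| puts "tp2 hit line #{t.lineno}" }

tp1.enable(target: m, target_line: line + 1)
tp2.enable(target: m, target_line: line + 2)
work                           # before the fix, only the last-enabled line fired
tp1.disable
tp2.disable
```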
Nobuyoshi Nakada 799253dc46
strip trailing spaces [ci skip] 2020-10-30 12:26:59 +09:00
Koichi Sasada 07c03bc309 check isolated Proc more strictly
An isolated Proc is prohibited from accessing outer local variables, but
this was violated via binding and the like, so such cases should raise an error.
2020-10-29 23:42:55 +09:00
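A rough Ruby-level illustration of the isolation check being tightened above: a Ractor block is an isolated Proc, so touching an outer local is rejected up front.

```
x = 42

begin
  Ractor.new { x }   # an isolated Proc may not access the outer local `x`
rescue ArgumentError => e
  puts e.message     # e.g. "can not isolate a Proc because it accesses outer variables"
end
```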
Nobuyoshi Nakada 081cc4eb28
Dump FrozenCore specially 2020-10-20 23:52:19 +09:00
Koichi Sasada f6661f5085 sync RClass::ext::iv_index_tbl
iv_index_tbl manages instance variable indexes (ID -> index).
This data structure should be synchronized with other ractors,
so this patch introduces some VM locks.

This patch also introduces an atomic ivar cache used by the
set/getinlinecache instructions. To allow the ivar cache (IVC) to be
updated, the iv_index_tbl data structure was changed to manage
(ID -> entry), where an entry points to a serial and an index. The IVC
points to this entry so that the cache can be updated atomically.
2020-10-17 08:18:04 +09:00
Alan Wu 1d8b689b9e Remove unused field in rb_iseq_constant_body
This was introduced in 191ce5344e
and has been unused since beae6cbf0f
2020-07-23 01:17:59 -04:00
Koichi Sasada a0f12a0258
Use ID instead of GENTRY for gvars. (#3278)
Global variables are compiled into GENTRY (a pointer to struct
rb_global_entry). This patch replaces GENTRY with ID and
simplifies the code.

We need to look the GENTRY up from the ID every time (st_lookup), so
some additional overhead is introduced. However, the performance of
accessing global variables is not that important nowadays, and this
simplicity helps Ractor development.
2020-07-03 16:56:44 +09:00
Koichi Sasada 8070cb56db
fix return event and opt_invokebuiltin_delegate_leave (#3256)
If a :return event is specified for an opt_invokebuiltin_delegate_leave
and leave combination, the instructions should become the
  opt_invokebuiltin_delegate
  trace_return
instructions. To make this work, the opt_invokebuiltin_delegate_leave
instruction is changed to opt_invokebuiltin_delegate even when
it is not an event target instruction.
2020-06-26 10:21:56 +09:00
Takashi Kokubun 737da8d383
Add another missing cast 2020-06-23 23:57:26 -07:00
Takashi Kokubun 6ecef1199e
Add missing cast 2020-06-23 23:50:31 -07:00
Takashi Kokubun 3e02cd518f
Trace :return of builtin methods
using opt_invokebuiltin_delegate_leave insn.

Since Ruby 2.7, :return events of methods using builtins have not been traced properly.
2020-06-23 23:42:38 -07:00
Koichi Sasada b06d7c5521
An ISeq created with a callback is special; translation cannot be applied 2020-06-17 08:18:45 +09:00
卜部昌平 77293cef91 vm_ci_markable: added
CIs are created on-the-fly, which increases GC pressure.  However they
include no references to other objects, and those on-the-fly CIs tend to
be short-lived.  Why not skip allocating them?  In doing so, we need
to add a flag that denotes the CI object does not reside inside the objspace.
2020-06-09 09:52:46 +09:00
卜部昌平 9e41a75255 sed -i 's|ruby/impl|ruby/internal|'
To fix build failures.
2020-05-11 09:24:08 +09:00
卜部昌平 d7f4d732c1 sed -i s|ruby/3|ruby/impl|g
This shall fix compile errors.
2020-05-11 09:24:08 +09:00
Nobuyoshi Nakada 69b3e0ac59 Create succ_index_table as a part of `iseq_setup`
When compiled with `CPDEBUG >= 2`, `rb_iseq_disasm` segfaults if this
table has not been created.  Also `ibf_load_iseq_each` calls
`rb_iseq_insns_info_encode_positions`.
2020-04-15 16:06:48 +09:00
Nobuyoshi Nakada f9822d1738
Shrink disassembled result string 2020-04-15 12:17:45 +09:00
Nobuyoshi Nakada e474c189da
Suppress -Wswitch warnings 2020-04-08 15:13:37 +09:00
卜部昌平 9e6e39c351
Merge pull request #2991 from shyouhei/ruby.h
Split ruby.h
2020-04-08 13:28:13 +09:00
Jeremy Evans d2c41b1bff Reduce allocations for keyword argument hashes
Previously, passing a keyword splat to a method always allocated
a hash on the caller side, and accepting arbitrary keywords in
a method allocated a separate hash on the callee side.  Passing
explicit keywords to a method that accepted a keyword splat
did not allocate a hash on the caller side, but resulted in two
hashes allocated on the callee side.

This commit makes passing a single keyword splat to a method not
allocate a hash on the caller side.  Passing multiple keyword
splats or a mix of explicit keywords and a keyword splat still
generates a hash on the caller side.  On the callee side,
if arbitrary keywords are not accepted, it does not allocate a
hash.  If arbitrary keywords are accepted, it will allocate a
hash, but this commit uses a callinfo flag to indicate whether
the caller already allocated a hash, and if so, the callee can
use the passed hash without duplicating it.  So this commit
should make it so that a maximum of a single hash is allocated
during method calls.

To set the callinfo flag appropriately, method call argument
compilation checks if only a single keyword splat is given.
If only one keyword splat is given, the VM_CALL_KW_SPLAT_MUT
callinfo flag is not set, since in that case the keyword
splat is passed directly and not mutable.  If more than one
splat is used, a new hash needs to be generated on the caller
side, and in that case the callinfo flag is set, indicating
the keyword splat is mutable by the callee.

In compile_hash, used for both hash and keyword argument
compilation, if compiling keyword arguments and only a
single keyword splat is used, pass the argument directly.

On the caller side, in vm_args.c, the callinfo flag needs to
be recognized and handled.  Because the keyword splat
argument may not be a hash, it needs to be converted to a
hash first if not.  Then, unless the callinfo flag is set,
the hash needs to be duplicated.  The temporary copy of the
callinfo flag, kw_flag, is updated if a hash was duplicated,
to prevent the need to duplicate it again.  If we are
converting to a hash or duplicating a hash, we need to update
the argument array, which can include duplicating the
positional splat array if one was passed.  CALLER_SETUP_ARG
and a couple of other places need to be modified to handle
similar issues for other types of calls.

This includes fairly comprehensive tests for different ways
keywords are handled internally, checking that you get equal
results but that keyword splats on the caller side result in
distinct objects for keyword rest parameters.

Included are benchmarks for keyword argument calls.
Brief results when compiled without optimization:

  def kw(a: 1) a end
  def kws(**kw) kw end
  h = {a: 1}

  kw(a: 1)       # about same
  kw(**h)        # 2.37x faster
  kws(a: 1)      # 1.30x faster
  kws(**h)       # 2.19x faster
  kw(a: 1, **h)  # 1.03x slower
  kw(**h, **h)   # about same
  kws(a: 1, **h) # 1.16x faster
  kws(**h, **h)  # 1.14x faster
2020-03-17 12:09:43 -07:00
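A small check of the caller/callee split described above (method and hash names as in the commit's benchmark): a single keyword splat is passed through without a caller-side copy, but a **rest parameter still receives its own hash, so the two are distinct objects.

```
def kws(**kw) kw end

h = {a: 1}
r = kws(**h)

p r            # => {:a=>1}
p r.equal?(h)  # => false; the keyword rest is distinct from the caller's hash
```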
Takashi Kokubun 8562bfd150
Move code to mark jit_unit's cc_entries to mjit.c 2020-03-12 22:21:32 -07:00
Takashi Kokubun da4b97a0e3
Pin and inline cme in JIT-ed method calls
```
$ benchmark-driver benchmark.yml -v --rbenv 'before --jit;after --jit' --repeat-count=12 --output=all
before --jit: ruby 2.8.0dev (2020-03-11T07:43:12Z master e89ebdcb87) +JIT [x86_64-linux]
after --jit: ruby 2.8.0dev (2020-03-11T07:54:18Z master 143776a0da) +JIT [x86_64-linux]
Calculating -------------------------------------
                                 before --jit           after --jit
Optcarrot Lan_Master.nes    73.86976729561439     77.20184819316513 fps
                            74.46997176460742     78.43493030231805
                            77.59686308754307     78.55714131655935
                            78.53693921126656     79.08984255596820
                            80.10158944910573     79.17751731838183
                            80.12254974411167     79.60853122429181
                            80.28678655204945     79.74674066871896
                            80.38690681095379     79.90624544440300
                            80.79223498756919     80.57881084206193
                            80.82857188422419     80.70677614429169
                            81.06447745878245     81.03868541295149
                            81.21620802278490     82.16354660940607
```
2020-03-11 00:59:34 -07:00
Takashi Kokubun 9511b4c8fa
Optimize away call data refs in JIT-ed method calls
According to ko1, `cd->cc != cc` was a guard for GC.compact.
As we pin cc with rb_gc_mark(), we don't need the check.

```
$ benchmark-driver benchmark.yml -v --rbenv 'before --jit;after --jit' --repeat-count=12 --output=all
before --jit: ruby 2.8.0dev (2020-03-11T05:36:48Z master da6948753e) +JIT [x86_64-linux]
after --jit: ruby 2.8.0dev (2020-03-11T06:26:34Z master 36b20b8b4a) +JIT [x86_64-linux]
Calculating -------------------------------------
                                 before --jit           after --jit
Optcarrot Lan_Master.nes    74.03480698689405     71.63404803273507 fps
                            74.15085286586992     73.43923328104295
                            75.51738277744781     75.75465268365384
                            76.24922600109410     76.74071607861318
                            76.45513422802325     77.47521029238116
                            76.86617230739330     78.14759496269018
                            77.71509137131933     79.14051571125866
                            77.72839157096146     79.35884822673313
                            78.25218904561633     79.92538876408051
                            78.72521071333249     79.98075556706726
                            78.79950460165091     80.51747831497875
                            79.43884960720381     80.97973166525254
```
2020-03-10 23:29:50 -07:00
Takashi Kokubun 33b78b89ac
Eliminate unnecessary mjit_iseq_cc_entries calls
just in case.
2020-02-26 00:34:02 -08:00
Takashi Kokubun 69f377a3d6
Internalize rb_mjit_unit definition again
Fixed a TODO in b9007b6c54
2020-02-26 00:27:29 -08:00
Koichi Sasada 84d1a99a3f jit_unit->cc_entries should be initialized
GC can be invoked just after the allocation of jit_unit->cc_entries, so
it should be zero-cleared.
2020-02-25 13:37:52 +09:00
Koichi Sasada f744d80106 check USE_MJIT
iseq->body->jit_unit is not available if USE_MJIT==0.
2020-02-22 11:54:19 +09:00
Koichi Sasada b9007b6c54 Introduce disposable call-cache.
This patch contains several ideas:

(1) Disposable inline method cache (IMC) for a race-free inline method cache
    * Make the call-cache (CC) an RVALUE (GC target object) and allocate a new
      CC on a cache miss.
    * This technique allows race-free access from parallel processing
      elements, like RCU.
(2) Introduce a per-Class method cache (pCMC)
    * Instead of the fixed-size global method cache (GMC), pCMC allows a
      flexible cache size.
    * Caching CCs reduces CC allocation and allows sharing a CC's fast path
      between call sites with the same call-info (CI).
(3) Invalidate an inline method cache by invalidating the corresponding method
    entries (MEs)
    * Instead of using class serials, we set an "invalidated" flag on the
      method entry itself to represent cache invalidation.
    * Compared with using class serials, the impact of a method modification
      (add/overwrite/delete) is small.
    * Updating class serials invalidates all method caches of the class and
      its subclasses.
    * The proposed approach invalidates the method cache of only one ME.

See [Feature #16614] for more details.
2020-02-22 09:58:59 +09:00
Koichi Sasada f2286925f0 VALUE size packed callinfo (ci).
Now, rb_call_info contains how to call the method as a tuple of
(mid, orig_argc, flags, kwarg). In most cases, kwarg == NULL and
mid+argc+flags only require 64 bits, so this patch packs
rb_call_info into a VALUE (1 word) in such cases. If it can not be
represented in a VALUE, imemo_callinfo is used instead, which contains
a conventional callinfo (rb_callinfo, renamed from rb_call_info).

iseq->body->ci_kw_size is removed because every callinfo is now VALUE
sized (a packed ci or a pointer to an imemo_callinfo).

To access ci information, we need to use these functions:
vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci).

struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg.

rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc()
are temporarily removed because cd->ci should be marked.
2020-02-22 09:58:59 +09:00
卜部昌平 b223a78a71 this ternary operator is an undefined behaviour
Let me quote ISO/IEC 9899:2018 section 6.5.15:

> Constraints
>
> The first operand shall have scalar type.
> One of the following shall hold for the second and third operands:
> — both operands have arithmetic type;
> — both operands have the same structure or union type;
> — both operands have void type;
(snip)

Here, `*option` is a const struct rb_compile_option_struct. OTOH
`COMPILE_OPTION_DEFAULT` is a struct rb_compile_option_struct, without
const.   These two are _not_ the "same structure or union type".  Hence
the expression renders undefined behaviour.  COMPILE_OPTION_DEFAULT is
not a const because `RubyVM::InstructionSequence.compile_option=`
touches its internals on-the-fly.  There is no way to meet the
constraints quoted above.

Using the ternary operator here was a mistake in the first place.  Let's
just replace it with a normal `if` statement.
2020-02-06 11:46:51 +09:00
0x005c 461db352c2 Rename RUBY_MARK_NO_PIN_UNLESS_NULL to RUBY_MARK_MOVABLE_UNLESS_NULL 2020-01-23 00:11:03 +13:00
卜部昌平 5e22f873ed decouple internal.h headers
Saves committers' daily life by avoiding #include-ing everything from
internal.h; each file now includes what it needs instead.  This would
significantly speed up incremental builds.

We take the following inclusion order in this changeset:

1.  "ruby/config.h", where _GNU_SOURCE is defined (must be the very
    first thing among everything).
2.  RUBY_EXTCONF_H if any.
3.  Standard C headers, sorted alphabetically.
4.  Other system headers, maybe guarded by #ifdef
5.  Everything else, sorted alphabetically.

Exceptions are the win32-related headers, which tend not to be self-
contained (headers have inclusion-order dependencies).
2019-12-26 20:45:12 +09:00
Nobuyoshi Nakada db16629008
Fixed misspellings
Fixed misspellings reported at [Bug #16437], only in ruby and rubyspec.
2019-12-20 09:32:42 +09:00
Yusuke Endoh 60c53ff6ee vm_core.h (iseq_unique_id): prefer uintptr_t instead of unsigned long
It produced a type-cast warning on LLP64 platforms (i.e., Windows).
2019-12-10 17:12:21 +09:00
Yusuke Endoh 156fb72d70 vm_args.c (rb_warn_check): Use iseq_unique_id instead of its pointer
(This is the second try of 036bc1da6c6c9b0fa9b7f5968d897a9554dd770e.)

If an iseq is GC'ed, its pointer may be reused, which may hide a
deprecation warning for the keyword argument change.

http://ci.rvm.jp/results/trunk-test1@phosphorus-docker/2474221

```
1) Failure:
TestKeywordArguments#test_explicit_super_kwsplat [/tmp/ruby/v2/src/trunk-test1/test/ruby/test_keyword.rb:549]:
--- expected
+++ actual
@@ -1 +1 @@
-/The keyword argument is passed as the last hash parameter.* for `m'/m
+""
```

This change adds an ad-hoc iseq_unique_id to each iseq and uses it
instead of the iseq pointer.  This covers the case where the caller is
GC'ed. The case where the callee is GC'ed is still not covered.

Anyway, it is very rare for an iseq to be GC'ed.  Even when it happens,
it only hides some warnings.  It's no big deal.
2019-12-09 15:22:48 +09:00