Noticed that struct rb_builtin_function is a purely compile-time
constant. MJIT can eliminate some runtime calculations by statically
generating a dedicated C code generator for each builtin function.
Use ID instead of GENTRY for gvars.
Global variables are compiled into GENTRY (a pointer to struct
rb_global_entry). This patch replaces GENTRY with ID and
makes the code simpler.
We now need to look up the GENTRY from the ID every time (st_lookup), so
some additional overhead is introduced.
However, the performance of accessing global variables is not
important nowadays, and this simplification helps Ractor development.
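As a rough, illustrative way to gauge that overhead (a sketch; the absolute number depends on the machine and Ruby version):
```ruby
require 'benchmark'

$counter = 0
# Each read/write of $counter now goes through an ID -> entry lookup.
puts Benchmark.realtime { 1_000_000.times { $counter += 1 } }
```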
These two functions were almost identical, except for the T_STRING/T_FLOAT
cases. Why not merge them into one and let the difference be
handled by normal method calls (the slow path)? This does not improve
runtime performance for me, but it at least shrinks, for instance, rb_eql_opt
from 653 bytes to 86 bytes on my machine, according to nm(1).
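For context, a Float example of where == and eql? diverge (one of the cases now handled by the slow path):
```ruby
p 1 == 1.0      # => true   (numeric == compares across Integer/Float)
p 1.eql?(1.0)   # => false  (eql? additionally requires the same class)
p 1.0.eql?(1.0) # => true
```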
This changes the following warnings:
* warning: class variable access from toplevel
* warning: class variable @foo of D is overtaken by C
into RuntimeErrors. Handle defined?(@@foo) at toplevel
by returning nil instead of raising an exception (the previous
behavior warned before returning nil when defined? was used).
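A short sketch of the resulting behavior (run at the top level of a script):
```ruby
begin
  @@foo = 1                 # previously a warning; now raises
rescue RuntimeError => e
  puts e.message            # => class variable access from toplevel
end

p defined?(@@foo)           # => nil (returns nil instead of raising)
```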
Refactor the specs to avoid the warnings even in older versions.
The specs were checking for the warnings, but the purpose of
the related specs as evidenced from their description is to
test for behavior, not for warnings.
Fixes [Bug #14541]
This patch contains several ideas:
(1) Disposable inline method cache (IMC) for race-free inline method cache
* Make the call cache (CC) an RVALUE (a GC-managed object) and allocate a new
CC on cache miss.
* This technique allows race-free access from parallel processing
elements like RCU.
(2) Introduce per-Class method cache (pCMC)
* Instead of fixed-size global method cache (GMC), pCMC allows flexible
cache size.
* Caching CCs reduces CC allocations and allows sharing a CC's fast path
between call sites with the same call info (CI).
(3) Invalidate an inline method cache by invalidating corresponding method
entries (MEs)
* Instead of using class serials, we set an "invalidated" flag on the method
entry itself to represent cache invalidation.
* Compared with using class serials, the impact of a method modification
(add/overwrite/delete) is small.
* Updating class serials invalidates all method caches of the class and
its sub-classes.
* The proposed approach invalidates only the method cache of the one affected ME.
See [Feature #16614] for more details.
Now, rb_call_info describes how to call the method with a tuple of
(mid, orig_argc, flags, kwarg). In most cases, kwarg == NULL and
mid+argc+flags only requires 64 bits, so this patch packs
rb_call_info into a VALUE (1 word) in such cases. If it cannot be
represented in a VALUE, an imemo_callinfo containing a
conventional callinfo (rb_callinfo, renamed from rb_call_info) is used.
iseq->body->ci_kw_size is removed because every callinfo is now VALUE
sized (a packed ci or a pointer to an imemo_callinfo).
To access ci information, we need to use these functions:
vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci).
struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg.
rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc()
are temporarily removed because cd->ci should be marked.
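An illustrative toy sketch of the packing idea in Ruby (field widths and layout here are made up for the example; the real layout lives in the C source):
```ruby
MID_SHIFT   = 32
FLAGS_SHIFT = 16

def pack_ci(mid, argc, flags)
  (mid << MID_SHIFT) | (flags << FLAGS_SHIFT) | argc
end

def ci_mid(ci);   ci >> MID_SHIFT;              end
def ci_flags(ci); (ci >> FLAGS_SHIFT) & 0xffff; end
def ci_argc(ci);  ci & 0xffff;                  end

ci = pack_ci(1234, 2, 0b1)
p [ci_mid(ci), ci_argc(ci), ci_flags(ci)]  # => [1234, 2, 1]
```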
This commit introduces an "inline ivar cache" struct. The reason we
need this is so compaction can differentiate between an ivar cache and a
regular inline cache. Regular inline caches contain references to
`VALUE`, and ivar caches just contain the ivar index. With
this new struct we can easily update references for inline caches (but
not inline ivar caches, as they just contain an int).
Asynchronous events such as signal trap, finalization timing,
thread switching and so on are managed by "interrupt_flag".
Ruby's threads check this flag periodically, and if a thread
does not check this flag, the above events do not happen.
This checking is done by the CHECK_INTS() (and related) macros, which are placed
at certain points (the leave instruction and so on). However, at the end
of C methods, C blocks (IMEMO_IFUNC) and so on, there is no such check,
which can lead to an uninterruptible thread.
To address this, we decided to place CHECK_INTS() in
vm_pop_frame(). This increases the number of interrupt checking points.
[Bug #16366]
This patch can introduce unexpected events...
opt_invokebuiltin_delegate and opt_invokebuiltin_delegate_leave
invoke builtin functions with the same parameters as the method.
This technique eliminates stack-push operations. However, the delegated
parameters had to be exactly the same as the method's parameters.
(e.g. `def foo(a, b, c) __builtin_foo(a, b, c)` is okay, but
__builtin_foo(b, c) is not allowed)
This patch relaxes this restriction. An ISeq has a local variable
table which includes the parameters. For example, for a method defined
as `def foo(a, b, c) x=y=nil`, the local variable table contains
[a, b, c, x, y]. If a builtin function is called with arguments that
form a sub-array of the lvar table, the opt_invokebuiltin_delegate
instruction is used with a start index. For example, `__builtin_foo(b, c)`
and `__builtin_bar(c, x, y)` are okay, and so on (see the sketch below).
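A sketch of what such a method would look like in a core builtin .rb file (this `__builtin_` syntax is only meaningful for files processed by the builtin loader during Ruby's own build; `__builtin_foo`/`__builtin_bar` are the hypothetical names from the example above):
```ruby
def foo(a, b, c)
  x = nil
  y = nil
  __builtin_foo(b, c)     # [b, c] is a sub-array of [a, b, c, x, y]    -> delegate insn
  __builtin_bar(c, x, y)  # [c, x, y] is a sub-array of [a, b, c, x, y] -> delegate insn
end
```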
vm_invoke_builtin() accesses the VM stack via cfp->sp. However, MJIT-generated
code can use its own stack. To access the values appropriately, we need to
use STACK_ADDR_FROM_TOP().
Support loading builtin features written in Ruby, which are implemented
with C builtin functions.
[Feature #16254]
Several features:
(1) Load .rb file at boottime with native binary.
Currently, prelude.rb is loaded at boot time. However, this file is embedded
in the interpreter as text and needs to be compiled at boot.
This patch adds a feature to load it from a binary format.
(2) __builtin_func() in Ruby calls func() written in C.
In a Ruby file, we can write `__builtin_func()` like a method call.
However, this is not a method call, but special syntax for calling
a function `func()` written in C. The C functions should be defined
in the file (same compilation unit) that loads this .rb file.
These functions (`func` in the example above) should be defined with:
(a) first parameter: rb_execution_context_t *ec,
(b) the rest of the parameters (0 to 15),
(c) a VALUE return type.
These requirements are very similar to those for functions used by
rb_define_method(); the `rb_execution_context_t *ec` parameter
is the new requirement. (A sketch follows this list.)
(3) automatic C code generation from .rb files.
tool/mk_builtin_loader.rb creates C code to load the .rb files
needed by miniruby and the ruby command. This script is run by
BASERUBY, so *.rb should be written in BASERUBY-compatible
syntax. The script loads a .rb file, finds all method calls with
the __builtin_ prefix, and generates the part of the C code that
exports the functions.
tool/mk_builtin_binary.rb creates C code which contains the
binary-compiled Ruby files needed by the ruby command.
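A minimal sketch of the Ruby side, using a hypothetical builtin file and C function named example_func (per (2) above, the C function takes rb_execution_context_t *ec followed by the VALUE arguments and lives in the same compilation unit):
```ruby
# example.rb -- only meaningful inside Ruby's own source tree, where
# tool/mk_builtin_loader.rb processes it at build time.
class Integer
  def example(x)
    __builtin_example_func(x)  # calls the C function example_func(), not a Ruby method
  end
end
```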
To perform a regular method call, the VM needs two structs,
`rb_call_info` and `rb_call_cache`. At the moment, we allocate these two
structures in separate buffers. In the worst case, the CPU needs to read
4 cache lines to complete a method call. Putting the two structures
together reduces the maximum number of cache line reads to 2.
Combining the structures also saves 8 bytes per call site, as the current
layout uses two separate pointers for the call info and the call cache.
This saves about 2 MiB on Discourse.
This change improves the Optcarrot benchmark by at least 3%. For more
details, see the attached bugs.ruby-lang.org ticket.
Complications:
- A new instruction attribute `comptime_sp_inc` is introduced to
calculate SP increase at compile time without using call caches. At
compile time, a `TS_CALLDATA` operand points to a call info struct, but
at runtime, the same operand points to a call data struct. Instructions
that explicitly define `sp_inc` also need to define `comptime_sp_inc`.
- MJIT code for copying call cache becomes slightly more complicated.
- This changes the bytecode format, which might break existing tools.
[Misc #16258]
This reverts commits: 10d6a3aca7 8ba48c1b85 fba8627dc1 dd883de5ba 6c6a25feca 167e6b48f1 7cb96d41a5 3207979278 595b3c4fdd 1521f7cf89 c11c5e69ac cf33608203 3632a812c0 f56506be0d 86427a3219.
The reason for the revert is that we observed an ABA problem around
the inline method cache. When a cache misses, we search for a
method entry, and if the entry is identical to what was cached
before, we reuse the cache. But the commits we are reverting here
introduced situations where a method entry is freed and then the
identical memory region is used for another method entry. An
inline method cache cannot detect that ABA.
Here is code that reproduces such a situation:
```ruby
require 'prime'
class << Integer
  alias org_sqrt sqrt
  def sqrt(n)
    raise
  end

  GC.stress = true
  Prime.each(7*37){} rescue nil # <- Here we populate CC
  class << Object.new; end

  # These adjacent remove-then-alias maneuver
  # frees a method entry, then immediately
  # reuses it for another.
  remove_method :sqrt
  alias sqrt org_sqrt
end

Prime.each(7*37).to_a # <- SEGV
```
At last, not only I but also the compiler can be fully confident
that the method entries pointed to by call caches are immutable. We
don't have to worry about silent updates. Just delete the branch
that is now always false.
Calculating -------------------------------------
ours trunk
vm2_poly_same_method 2.142M 2.070M i/s - 6.000M times in 2.801148s 2.898994s
Comparison:
vm2_poly_same_method
ours: 2141979.2 i/s
trunk: 2069683.8 i/s - 1.03x slower
I noticed that in the case of a cache miss, the recalculated cc->me can
be the same method entry as the previous one. That is an okay
situation, but can't we partially reuse the cache, because cc->call
should still be valid then?
One thing that has to be special-cased is when the method entry
gets amended by some refinements. That happens behind the scenes
of the call cache mechanism. We have to check whether cc->me->def points to
the previously saved one.
Calculating -------------------------------------
trunk ours
vm2_poly_same_method 1.534M 2.025M i/s - 6.000M times in 3.910203s 2.962752s
Comparison:
vm2_poly_same_method
ours: 2025143.9 i/s
trunk: 1534447.2 i/s - 1.32x slower
Some tooling depends on the current bytecode, and adding an operand
changes the bytecode. While tooling can be updated for new bytecode,
this support doesn't warrant such a change.
This was an intentional bug added in 1.9.
The approach taken here is to add a second operand to the
getconstant instruction for whether nil should be allowed and
treated as current scope.
Fixes [Bug #11718]
* insns.def: add definemethod and definesmethod (singleton method)
instructions. Old YARV contained these instructions, but they were moved
to methods of the FrozenCore class because reducing the number of instructions
can improve performance for some techniques (static stack caching
and so on). However, we don't employ those techniques, and it is hard
to optimize/analyze the definition sequence. So I decided to reintroduce
them (and remove the definition methods). The `putiseq` insn is also removed.
* vm_method.c (rb_scope_visibility_get): renamed to
`vm_scope_visibility_get()` and made to accept `ec`.
Same for `vm_scope_module_func_check()`.
These changes are the result of refactoring `vm_define_method`.
* vm_insnhelper.c (rb_vm_get_cref): renamed to `vm_get_cref`
because of consistency with other functions.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67442 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
related: r66982
Sadly opt_regexpmatch2 was not a leaf insn either.
http://ci.rvm.jp/results/trunk-vm-asserts@silicon-docker/1751213
CHECK_INTERRUPT_IN_MATCH_AT is just like RUBY_VM_CHECK_INTS, and it may
call arbitrary Ruby method, for example a GC finalizer from postponed
job in this case.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67091 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Given `str`, if `str_coderange(str)` is `ENC_CODERANGE_BROKEN`,
it calls `rb_raise`. And it calls `rb_funcallv` from `rb_exc_new3`.
http://ci.rvm.jp/results/trunk-vm-asserts@silicon-docker/1673244
Maybe we can have a function to directly call `exc_initialize` for this
purpose, but it may not be worth having such a function for keeping the
instruction leaf. We may even want to delete the insn:
https://github.com/ruby/ruby/pull/1959.
I'm not sure whether compile.c can ever generate opt_regexpmatch2 for an
invalid-coderange string. Let's monitor that for a while.
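For illustration, this is the kind of situation described above, where preparing the match raises and therefore cannot stay on a leaf path (a sketch; the exact instruction emitted depends on how the literal is placed):
```ruby
broken = "\xff".dup.force_encoding("UTF-8")
p broken.valid_encoding?   # => false (ENC_CODERANGE_BROKEN)

begin
  broken =~ /a/            # matching against a broken string raises
rescue ArgumentError => e
  puts e.message           # e.g. "invalid byte sequence in UTF-8"
end
```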
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66982 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
These instructions were missed before. The stack canary mechanism
(see r64677) can not detect rb_raise() because exceptions jump over
the canary liveness check.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66980 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
They are considered Array and Hash creation events, so
allow dtrace (and systemtap) to track those creations.
Co-Authored-By: Eric Wong <e@80x24.org>
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66767 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
- FIXNUM_2_P: moved to vm_insnhelper.c because that is the only
place this macro is used.
- FLONUM_2_P: ditto.
- FLOAT_HEAP_P: not used anywhere.
- FLOAT_INSTANCE_P: ditto.
- GET_TOS: ditto.
- USE_IC_FOR_SPECIALIZED_METHOD: ditto.
- rb_obj_hidden_p: ditto.
- REG_A: ditto.
- REG_B: ditto.
- GET_CONST_INLINE_CACHE: ditto.
- vm_regan_regtype: moved inside of VM_COLLECT_USAGE_DETAILS
because that is the only place this enum is used.
- vm_regan_acttype: ditto.
- GET_GLOBAL: used only once. Removed with replacing that usage.
- SET_GLOBAL: ditto.
- rb_method_definition_create: declaration moved to
vm_insnhelper.c because that is the only place this declaration
makes sense.
- rb_method_definition_set: ditto.
- rb_method_definition_eq: ditto.
- rb_make_no_method_exception: ditto.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66597 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Just add more room for comments. This is a pure refactoring that does
not change anything but readability.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66564 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
The instructions are just for optimization. To clarify the intention,
this change adds the prefix "opt_", like "opt_case_dispatch".
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65600 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
In these expressions `1` is of type `signed int` (cf: ISO/IEC
9899:1990 section 6.1.3.2). The variable (e.g. `num`) is of type
`rb_num_t`, which is in fact `unsigned long`. These two expressions
then exercises the "usual arithmetic conversions" (cf: ISO/IEC
9899:1990 section 6.2.1.5) and both eventually become `unsigned long`.
The two unsigned expressions are then subtracted to generate another
unsigned integer expression (cf: ISO/IEC 9899:1990 section 6.3.6).
This is where integer overflows can occur. OTOH the left hand side of
the assignments are `rb_snum_t` which is `signed long`. The
assignments exercise the "implicit conversion" of "an unsigned integer
is converted to its corresponding signed integer" case (cf: ISO/IEC
9899:1990 section 6.2.1.2), which is "implementation-defined" (read:
not portable).
Casts are the proper way to avoid this problem. Because all
expressions are converted to some integer types before any binary
operations are performed, the assignments now have fully defined
behaviour. These values can never exceed LONG_MAX, so the casts cannot
lose any information.
See also: https://travis-ci.org/ruby/ruby/jobs/451726874#L4357
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65595 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* transient_heap.c, transient_heap.h: implement TransientHeap (theap).
theap is designed for Ruby's object system. theap is like the Eden heap
in generational GC terminology. theap allocation is very fast because
it only needs to bump up a pointer, and deallocation is also fast because
we don't do anything. However, we need to evacuate (Copy GC terminology)
if theap memory is long-lived. Evacuation logic is needed for each type.
See [Bug #14858] for details.
* array.c: Now, theap for T_ARRAY is supported.
ary_heap_alloc() tries to allocate a memory area from theap. If this trial
succeeds, the array has a theap ptr and RARRAY_TRANSIENT_FLAG is turned on.
We don't need to free the theap ptr.
* ruby.h: RARRAY_CONST_PTR() returns a malloc'ed memory area. This means that
if ary is allocated on theap, evacuation to malloc'ed memory is forced.
It makes programs slower, but very compatible with current code, because
theap memory can be evacuated (and the theap memory will be recycled).
If you want the transient heap ptr, use RARRAY_CONST_PTR_TRANSIENT()
instead of RARRAY_CONST_PTR(). If you are not sure when evacuation
will occur, use RARRAY_CONST_PTR().
(re-commit of r65444)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65449 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* transient_heap.c, transient_heap.h: implement TransientHeap (theap).
theap is designed for Ruby's object system. theap is like the Eden heap
in generational GC terminology. theap allocation is very fast because
it only needs to bump up a pointer, and deallocation is also fast because
we don't do anything. However, we need to evacuate (Copy GC terminology)
if theap memory is long-lived. Evacuation logic is needed for each type.
See [Bug #14858] for details.
* array.c: Now, theap for T_ARRAY is supported.
ary_heap_alloc() tries to allocate a memory area from theap. If this trial
succeeds, the array has a theap ptr and RARRAY_TRANSIENT_FLAG is turned on.
We don't need to free the theap ptr.
* ruby.h: RARRAY_CONST_PTR() returns a malloc'ed memory area. This means that
if ary is allocated on theap, evacuation to malloc'ed memory is forced.
It makes programs slower, but very compatible with current code, because
theap memory can be evacuated (and the theap memory will be recycled).
If you want the transient heap ptr, use RARRAY_CONST_PTR_TRANSIENT()
instead of RARRAY_CONST_PTR(). If you are not sure when evacuation
will occur, use RARRAY_CONST_PTR().
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65444 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
The idea behind this commit is that handles_sp and leaf are two
concepts that are not mutually independent. By making one explicitly
depend on the other, we can reduce the number of lines of code written,
thus making things more concise.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65426 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* insns.def (newhashfromarray): `rb_hash_bulk_insert()` can call
Ruby methods like #hash, so it should not be a leaf insn.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65345 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
The instructions were used only for branch coverage.
Instead, it now uses a trace framework [Feature #14104].
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@65225 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* insns.def (opt_send_without_block): reorder the insn position because
the `opt_str_freeze` insn refers to this insn (function) when
OPT_CALL_THREADED_CODE is true.
* vm_opts.h (OPT_THREADED_CODE): introduce a new macro to select the
threaded-code implementation with a compile option (-D...).
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64854 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
because r64849 seems to fix issues which we were confused about.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64850 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
as a workaround to fix the build pipeline broken by r64824,
because optimizing Ruby should be prioritized higher than supporting unused jokes.
In the current build system, exceeding 200 insns somehow crashes the C
extension build on some MinGW environments with errors like "mingw32-make[1]:
*** No rule to make target 'note'. Stop."
https://ci.appveyor.com/project/ruby/ruby/build/9725/job/co4nu9jugm8qwdrp
and on some Linux environments with "cannot load such file -- stringio (LoadError)"
```
build_install /home/ko1/ruby/src/trunk_gcc5/lib/rubygems/specification.rb:18:in `require': cannot load such file -- stringio (LoadError)
from /home/ko1/ruby/src/trunk_gcc5/lib/rubygems/specification.rb:18:in `<top (required)>'
from /home/ko1/ruby/src/trunk_gcc5/lib/rubygems.rb:1365:in `require'
from /home/ko1/ruby/src/trunk_gcc5/lib/rubygems.rb:1365:in `<module:Gem>'
from /home/ko1/ruby/src/trunk_gcc5/lib/rubygems.rb:116:in `<top (required)>'
from /home/ko1/ruby/src/trunk_gcc5/tool/rbinstall.rb:24:in `require'
from /home/ko1/ruby/src/trunk_gcc5/tool/rbinstall.rb:24:in `<main>'
make: *** [do-install-nodoc] Error 1
```
http://ci.rvm.jp/results/trunk_gcc5@silicon-docker/1353447
This commit removes "bitblt" and "trace_bitblt" insns, which reduces the
number of insns from 202 to 200 and fixes at least the latter build
failure. I hope this fixes the MinGW build failure as well. Let me
confirm the situation on AppVeyor CI.
Note that this is hard to fix because some MinGW environments (MSP-Greg's
MinGW CI on AppVeyor) don't reproduce this and some Linux environments
(including my local machine) don't reproduce it either. Make sure you
have an environment that reproduces the issue and confirm it's fixed before reverting
this commit.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64839 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This reverts commit r64829. I'll prepare another temporary fix, but I'll
separately commit that to make it easier to revert that later.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64838 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
not optimizing Array#& and Array#| because vm_insnhelper.c can't easily
inline them (a large amount of array.c code would be needed in vm_insnhelper.c)
and the method bodies are a little complicated compared to Integer's.
So I thought only Integer#& and Integer#| would have a significant impact,
and eliminating unnecessary branches would contribute to JIT performance.
vm_insnhelper.c: ditto
tool/transform_mjit_header.rb: make sure these instructions are inlined
on JIT.
compile.c: compile vm_opt_and and vm_opt_or.
id.def: define id for them to be used in compile.c and vm*.c
vm.c: track redefinition of Integer#& and Integer#|
vm_core.h: allow detecting redefinition of & and |
test/ruby/test_jit.rb: test new insns
test/ruby/test_optimization.rb: ditto
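A quick way to check that the specialized instruction is actually emitted (on builds that include this change; older builds show a plain opt_send_without_block for :& instead):
```ruby
code = "x = 1; y = 3; x & y"
puts RubyVM::InstructionSequence.compile(code).disasm
# The listing should contain an `opt_and` entry for the `x & y` expression.
```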
* Optcarrot benchmark
This is somewhat experimental, but I'm committing it since the
performance impact is significant, especially on Optcarrot with JIT.
$ benchmark-driver benchmark.yml --rbenv 'before::before --disable-gems;before+JIT::before --disable-gems --jit;after::after --disable-gems;after+JIT::after --disable-gems --jit' -v --repeat-count 24
before: ruby 2.6.0dev (2018-09-24 trunk 64821) [x86_64-linux]
before+JIT: ruby 2.6.0dev (2018-09-24 trunk 64821) +JIT [x86_64-linux]
after: ruby 2.6.0dev (2018-09-24 opt_and 64821) [x86_64-linux]
last_commit=opt_or
after+JIT: ruby 2.6.0dev (2018-09-24 opt_and 64821) +JIT [x86_64-linux]
last_commit=opt_or
Calculating -------------------------------------
before before+JIT after after+JIT
Optcarrot Lan_Master.nes 51.460 66.315 53.023 71.173 fps
Comparison:
Optcarrot Lan_Master.nes
after+JIT: 71.2 fps
before+JIT: 66.3 fps - 1.07x slower
after: 53.0 fps - 1.34x slower
before: 51.5 fps - 1.38x slower
[close https://github.com/ruby/ruby/pull/1963]
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64824 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Now that we can say for sure whether an instruction calls a method
internally, it is possible to route around the bugs that
forced us to revert the "move PC around" optimization.
First try: r62051
Reverted: r63763
See also: r63999
----
trunk: ruby 2.6.0dev (2018-09-13 trunk 64736) [x86_64-darwin15]
ours: ruby 2.6.0dev (2018-09-13 trunk 64736) [x86_64-darwin15]
last_commit=move ADD_PC around (take 2)
Calculating -------------------------------------
trunk ours
so_ackermann 1.884 2.278 i/s - 1.000 times in 0.530926s 0.438935s
so_array 1.178 1.157 i/s - 1.000 times in 0.848786s 0.864467s
so_binary_trees 0.176 0.177 i/s - 1.000 times in 5.683895s 5.657707s
so_concatenate 0.220 0.221 i/s - 1.000 times in 4.546896s 4.518949s
so_count_words 6.729 6.470 i/s - 1.000 times in 0.148602s 0.154561s
so_exception 3.324 3.688 i/s - 1.000 times in 0.300872s 0.271147s
so_fannkuch 0.546 0.968 i/s - 1.000 times in 1.831328s 1.033376s
so_fasta 0.541 0.547 i/s - 1.000 times in 1.849923s 1.827091s
so_k_nucleotide 0.800 0.777 i/s - 1.000 times in 1.250635s 1.286295s
so_lists 2.101 1.848 i/s - 1.000 times in 0.475954s 0.541095s
so_mandelbrot 0.435 0.408 i/s - 1.000 times in 2.299328s 2.450535s
so_matrix 1.946 1.912 i/s - 1.000 times in 0.513872s 0.523076s
so_meteor_contest 0.311 0.317 i/s - 1.000 times in 3.219297s 3.152052s
so_nbody 0.746 0.703 i/s - 1.000 times in 1.339815s 1.423441s
so_nested_loop 0.899 0.901 i/s - 1.000 times in 1.111767s 1.109555s
so_nsieve 0.559 0.579 i/s - 1.000 times in 1.787763s 1.726552s
so_nsieve_bits 0.435 0.428 i/s - 1.000 times in 2.296282s 2.333852s
so_object 1.368 1.442 i/s - 1.000 times in 0.731237s 0.693684s
so_partial_sums 0.616 0.546 i/s - 1.000 times in 1.623592s 1.833097s
so_pidigits 0.831 0.832 i/s - 1.000 times in 1.203117s 1.202334s
so_random 2.934 2.724 i/s - 1.000 times in 0.340791s 0.367150s
so_reverse_complement 0.583 0.866 i/s - 1.000 times in 1.714144s 1.154615s
so_sieve 1.829 2.081 i/s - 1.000 times in 0.546607s 0.480562s
so_spectralnorm 0.524 0.558 i/s - 1.000 times in 1.908716s 1.792382s
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64737 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
String#freeze can be redefined to be destructive. While such a
redefinition is definitely weird, it should be possible. Resurrect
the string to prepare for that sort of thing.
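A sketch of the (admittedly weird) scenario this guards against; the redefined method must win over the frozen-literal fast path:
```ruby
class String
  def freeze
    upcase!   # deliberately destructive redefinition
    self
  end
end

p "abc".freeze   # => "ABC" (dispatches to the redefinition, not the optimized path)
```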
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64691 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Simply use DISPATCH_ORIGINAL_INSN instead of rb_funcall. This is,
when possible, faster overall because method dispatch results are
cached inside the CALL_CACHE. It should also be good for JIT.
----
trunk: ruby 2.6.0dev (2018-09-12 trunk 64689) [x86_64-darwin15]
ours: ruby 2.6.0dev (2018-09-12 leaf-insn 64688) [x86_64-darwin15]
last_commit=make opt_str_freeze leaf
Calculating -------------------------------------
trunk ours
vm2_freezestring 5.440M 31.411M i/s - 6.000M times in 1.102968s 0.191017s
Comparison:
vm2_freezestring
ours: 31410864.5 i/s
trunk: 5439865.4 i/s - 5.77x slower
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64690 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This instruction can be written without rb_funcall. This not only boosts
the performance of case statements, but also makes room for future JIT
improvements. Because opt_case_dispatch exists for optimization, this
should not be a bad thing to have.
----
trunk: ruby 2.6.0dev (2018-09-05 trunk 64634) [x86_64-darwin15]
ours: ruby 2.6.0dev (2018-09-12 leaf-insn 64688) [x86_64-darwin15]
last_commit=make opt_case_dispatch leaf
Calculating -------------------------------------
trunk ours
vm2_case_lit 1.366 2.012 i/s - 1.000 times in 0.731839s 0.497008s
Comparison:
vm2_case_lit
ours: 2.0 i/s
trunk: 1.4 i/s - 1.47x slower
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64689 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
An instruction is leaf if it has no rb_funcall inside. In order to
check this property, we introduce a stack canary, which is a random
number generated at runtime. The stack top is always filled with this
number and checked for stack-smashing operations when VM_CHECK_MODE is set.
[GH-1947]
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64677 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
_mjit_compile_send.erb: simplify code using the change
insns.def: adapt to the interface change
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64281 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This is just a refactoring.
The receiver of "invokesuper" was a boolean to represent if it is ZSUPER
or not. This was used in vm_search_super_method to prohibit ZSUPER call
in define_method. (It is currently prohibited because of the limitation
of the implementation.)
This change removes the hack by introducing an explicit flag,
VM_CALL_SUPER, to signal the information. Now, the implementation of
"invokesuper" is consistent with "send" instruction.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64268 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
because it's more suitable to describe the current behavior now.
tool/ruby_vm/models/bare_instructions.rb: ditto.
tool/ruby_vm/views/_insn_entry.erb: ditto.
tool/ruby_vm/views/_mjit_compile_insn_body.erb: ditto.
tool/ruby_vm/views/_mjit_compile_pc_and_sp.erb: ditto.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64053 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
By the way, the original patch of r63988 was provided by wanabe:
https://github.com/wanabe/ruby/tree/local-stack
but I forgot to add his credit in the previous commit message.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63990 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This optimization was reverted on r63863, but this commit resurrects the
optimization to skip some sp motions on JIT execution.
tool/ruby_vm/views/_mjit_compile_insn_body.erb: ditto
tool/ruby_vm/views/_mjit_compile_insn.erb: ditto
insns.def: resurrect handles_frame as handles_stack, which was deleted
on r63763.
tool/ruby_vm/models/bare_instructions.rb: ditto
vm_insnhelper.c: prevent moving sp outside insns.def to allow modifying
it by JIT.
* Optcarrot benchmark
$ benchmark-driver benchmark.yml --rbenv 'before --jit;after --jit' --repeat-count 12 -v
before --jit: ruby 2.6.0dev (2018-07-17 trunk 63987) +JIT [x86_64-linux]
after --jit: ruby 2.6.0dev (2018-07-17 local-stack 63987) +JIT [x86_64-linux]
last_commit=mjit_compile.c: resurrect local variable stack
Calculating -------------------------------------
before --jit after --jit
Optcarrot Lan_Master.nes 70.518 72.144 fps
Comparison:
Optcarrot Lan_Master.nes
after --jit: 72.1 fps
before --jit: 70.5 fps - 1.02x slower
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63988 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
r63655 was tightly coupled to handles_frame and some assumptions seem
to have been broken by r63763.
To partially resolve Bug#14892, this reverts the optimization for now. I
want to make MJIT CI happy first and then I'll probably retry r63655 by
partially reverting r63763 for sp changes.
The skipped test is not fixed yet.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63863 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
I introduced this mechanism in r62051 to speed things up. Later it
was reported that the change causes problems. I searched for
workarounds but nothing seemed appropriate. I hereby officially
give it up. The idea to move ADD_PC around was a mistake.
Fixes [Bug #14809] and [Bug #14834].
Signed-off-by: Urabe, Shyouhei <shyouhei@ruby-lang.org>
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63763 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This is a pure refactoring; I see no behavioral difference from this change.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63756 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* insns.def (checktype): split branchiftype into checktype and
branchif, to make branch condition negation possible.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63225 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
We need to mark default values for kwarg methods. This also fixes
Bootsnap. IBF iseq loading needed to mark iseqs as "having markable
objects".
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62851 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
Directly marking iseq operands allows us to eliminate the "mark array"
stored on ISEQ objects, which will reduce the amount of memory ISEQ
objects consume. This patch changes the iseq mark function to:
* Directly marks ISEQ operands
* Iterate over and mark child ISEQs
It also introduces two flags on the ISEQ object. In order to mark
instruction operands, we have to disassemble the instructions and find
the instruction parameters and types. Instructions may also be
translated to jump addresses. Instruction sequences may get marked by
the GC *while* they're in flight (being compiled). The
`ISEQ_TRANSLATED` flag is used to indicate whether the
instructions have been translated to jump addresses, so that when we
decode the instructions we know whether we need to go from the jump
location back to the original instruction.
Not all ISEQ objects have any markable objects embedded in their
instructions. We can detect whether or not an ISEQ has markable objects
in the instructions at compile time. If the instructions contain
markable objects, we set a flag `ISEQ_MARKABLE_ISEQ` on the ISEQ object.
This means that during the mark phase, we can skip decompilation if the
flag is *not* set. In other words, we can avoid decompilation if we
know in advance there is nothing to mark.
`once` instructions have an operand that contains the result of a
one-time compilation of a regex. Before this patch, that operand was
called an "inline cache", even though the struct was actually an "inline
storage". This patch changes the operand to be an "inline storage" so
that we can differentiate between caches that need marking (the inline
storage) and caches that don't need marking (inline cache).
[ruby-core:84909]
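For context, the one-time regexp compilation handled by the `once` instruction is the //o flag; a quick sketch of the user-visible behavior:
```ruby
def pattern(x)
  /#{x}/o   # /o: interpolate and compile this regexp only once per call site
end

p pattern("foo")  # => /foo/
p pattern("bar")  # => /foo/  (the once-compiled regexp is reused)
```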
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62706 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
if catch_except_p is FALSE. If catch_except_p is TRUE, stack values
should be on VM's stack when exception is thrown and the JIT-ed frame
is re-executed by VM's exception handler. If it's FALSE, the JIT-ed
frame won't be re-executed and we don't need to keep values on the VM's stack.
Using local variables allows us to reduce cfp->sp motion. Moving cfp->sp
is needed only for insns whose handles_frame? is false. So it improves
performance.
_mjit_compile_insn.erb: Prepare `stack_size` variable for GET_SP,
STACK_ADDR_FROM_TOP, TOPN macros. Share pc and sp motion partial view.
Use cancel handler created in mjit_compile.c.
_mjit_compile_send.erb: ditto. Also, when iseq->body->catch_except_p is
TRUE, this stops calling mjit_exec directly. I described the reason in
vm_insnhelper.h's comment for EXEC_EC_CFP.
_mjit_compile_pc_and_sp.erb: Shared logic for moving sp and pc. As you
can see from this file, when status->local_stack_p is TRUE and
insn.handles_frame? is false, moving sp is skipped. But if
insn.handles_frame? is true, values should be rolled back to VM's stack.
common.mk: add dependency for the file
_mjit_compile_insn_body.erb: Set sp value before canceling JIT on
DISPATCH_ORIGINAL_INSN. Replace GET_SP, STACK_ADDR_FROM_TOP, TOPN macros
for the case where local_stack_p is TRUE and insn.handles_frame? is false.
In that case, values are not available on VM's stack and those macros
should be replaced.
mjit_compile.inc.erb: updated comments of macros which are supported by
JIT compiler. All references to `cfp->sp` should be replaced and thus
INC_SP, SET_SV, PUSH are no longer supported for now, because they are
not used now.
vm_exec.h: moved the EXEC_EC_CFP definition to vm_insnhelper.h because it's
tightly coupled to CALL_METHOD.
vm_insnhelper.h: has the revised EXEC_EC_CFP definition moved from vm_exec.h.
Now it triggers mjit_exec for VM, and has the guard for catch_except_p
on JIT-ed code. See comments for details. CALL_METHOD delegates
triggering mjit_exec to EXEC_EC_CFP.
insns.def: Stopped using EXEC_EC_CFP for the case we don't want to
trigger mjit_exec. Those insns (defineclass, opt_call_c_function) are
not supported by JIT and it's safe to use RESTORE_REGS(), NEXT_INSN().
expandarray is changed to pass GET_SP() to replace the macro in
_mjit_compile_insn_body.erb.
vm_insnhelper.c: change to take sp for the above reason.
[close https://github.com/ruby/ruby/pull/1828]
This patch resurrects the performance which was attached in
[Feature #14235].
* Benchmark
Optcarrot (with configuration for benchmark_driver.gem)
https://github.com/benchmark-driver/optcarrot
$ benchmark-driver benchmark.yml --verbose 1 --rbenv 'before;before+JIT::before,--jit;after;after+JIT::after,--jit' --repeat-count 10
before: ruby 2.6.0dev (2018-03-04 trunk 62652) [x86_64-linux]
before+JIT: ruby 2.6.0dev (2018-03-04 trunk 62652) +JIT [x86_64-linux]
after: ruby 2.6.0dev (2018-03-04 local-variable.. 62652) [x86_64-linux]
last_commit=mjit_compile.c: use local variables for stack
after+JIT: ruby 2.6.0dev (2018-03-04 local-variable.. 62652) +JIT [x86_64-linux]
last_commit=mjit_compile.c: use local variables for stack
Calculating -------------------------------------
before before+JIT after after+JIT
optcarrot 53.552 59.680 53.697 63.358 fps
Comparison:
optcarrot
after+JIT: 63.4 fps
before+JIT: 59.7 fps - 1.06x slower
after: 53.7 fps - 1.18x slower
before: 53.6 fps - 1.18x slower
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62655 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* insns.def (getinlinecache): Qnil is a valid value as a constant.
This can be observed when accessing a deprecated constant
which is nil: a non-nil constant is warned about just once for each
location, but a nil one is warned about every time.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62350 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
which has been developed by Takashi Kokubun <takashikkbn@gmail.com> as
YARV-MJIT. Many of its bugs are fixed by wanabe <s.wanabe@gmail.com>.
This JIT compiler is designed to be a safe migration path to introduce
JIT compiler to MRI. So this commit does not include any bytecode
changes or dynamic instruction modifications, which are done in original
MJIT.
This commit even strips off some aggressive optimizations from
YARV-MJIT, and thus it's slower than YARV-MJIT too. But it's still
fairly faster than Ruby 2.5 in some benchmarks (attached below).
Note that this JIT compiler passes `make test`, `make test-all`, and `make
test-spec` without JIT, and even with JIT. Not only is it perfectly safe
with JIT disabled (because, unlike MJIT, it does not replace VM instructions),
but with JIT enabled it also stably runs Ruby applications,
including Rails applications.
I'm expecting this version to be just the "initial" JIT compiler. I have many
optimization ideas which are skipped for the initial merge, and you may
easily replace this JIT compiler with a faster one just by replacing
mjit_compile.c. The `mjit_compile` interface is designed for that purpose.
common.mk: update dependencies for mjit_compile.c.
internal.h: declare `rb_vm_insn_addr2insn` for MJIT.
vm.c: exclude some definitions if `-DMJIT_HEADER` is provided to the
compiler. This avoids including some functions which take a long time
to compile, e.g. vm_exec_core. Some of this is achieved in
transform_mjit_header.rb (see `IGNORED_FUNCTIONS`) but others are
manually resolved for now. Load mjit_helper.h for MJIT header.
mjit_helper.h: New. This is a file used only by JIT-ed code. I'll
refactor `mjit_call_cfunc` later.
vm_eval.c: add some #ifdef switches to skip compiling some functions
like Init_vm_eval.
win32/mkexports.rb: export thread/ec functions, which are used by MJIT.
include/ruby/defines.h: add the MJIT_FUNC_EXPORTED macro alias to clarify
that a function is exported only for MJIT.
array.c: export a function used by MJIT.
bignum.c: ditto.
class.c: ditto.
compile.c: ditto.
error.c: ditto.
gc.c: ditto.
hash.c: ditto.
iseq.c: ditto.
numeric.c: ditto.
object.c: ditto.
proc.c: ditto.
re.c: ditto.
st.c: ditto.
string.c: ditto.
thread.c: ditto.
variable.c: ditto.
vm_backtrace.c: ditto.
vm_insnhelper.c: ditto.
vm_method.c: ditto.
I would like to improve the maintainability of function exports, but I
believe this way is acceptable for the initial merge if we clarify that the
new exports are for MJIT (so that we can use them as a TODO list to fix)
and add unit tests to detect unresolved symbols.
I'll add unit tests of JIT compilations in succeeding commits.
Author: Takashi Kokubun <takashikkbn@gmail.com>
Contributor: wanabe <s.wanabe@gmail.com>
Part of [Feature #14235]
---
* Known issues
* Code generated by gcc is faster than that generated by clang. The benchmark may be worse
on macOS. The following benchmark results were produced by gcc w/ Linux.
* Performance is decreased when Google Chrome is running
* JIT can work on MinGW, but it doesn't improve performance at least
in short running benchmark.
* Currently it doesn't perform well with Rails. We'll try to fix this
before release.
---
* Benchmark results
Benchmarked with:
Intel 4.0GHz i7-4790K with 16GB memory under x86-64 Ubuntu 8 Cores
- 2.0.0-p0: Ruby 2.0.0-p0
- r62186: Ruby trunk (early 2.6.0), before MJIT changes
- JIT off: On this commit, but without `--jit` option
- JIT on: On this commit, and with `--jit` option
** Optcarrot fps
Benchmark: https://github.com/mame/optcarrot
| |2.0.0-p0 |r62186 |JIT off |JIT on |
|:--------|:--------|:--------|:--------|:--------|
|fps |37.32 |51.46 |51.31 |58.88 |
|vs 2.0.0 |1.00x |1.38x |1.37x |1.58x |
** MJIT benchmarks
Benchmark: https://github.com/benchmark-driver/mjit-benchmarks
(Original: https://github.com/vnmakarov/ruby/tree/rtl_mjit_branch/MJIT-benchmarks)
| |2.0.0-p0 |r62186 |JIT off |JIT on |
|:----------|:--------|:--------|:--------|:--------|
|aread |1.00 |1.09 |1.07 |2.19 |
|aref |1.00 |1.13 |1.11 |2.22 |
|aset |1.00 |1.50 |1.45 |2.64 |
|awrite |1.00 |1.17 |1.13 |2.20 |
|call |1.00 |1.29 |1.26 |2.02 |
|const2 |1.00 |1.10 |1.10 |2.19 |
|const |1.00 |1.11 |1.10 |2.19 |
|fannk |1.00 |1.04 |1.02 |1.00 |
|fib |1.00 |1.32 |1.31 |1.84 |
|ivread |1.00 |1.13 |1.12 |2.43 |
|ivwrite |1.00 |1.23 |1.21 |2.40 |
|mandelbrot |1.00 |1.13 |1.16 |1.28 |
|meteor |1.00 |2.97 |2.92 |3.17 |
|nbody |1.00 |1.17 |1.15 |1.49 |
|nest-ntimes|1.00 |1.22 |1.20 |1.39 |
|nest-while |1.00 |1.10 |1.10 |1.37 |
|norm |1.00 |1.18 |1.16 |1.24 |
|nsvb |1.00 |1.16 |1.16 |1.17 |
|red-black |1.00 |1.02 |0.99 |1.12 |
|sieve |1.00 |1.30 |1.28 |1.62 |
|trees |1.00 |1.14 |1.13 |1.19 |
|while |1.00 |1.12 |1.11 |2.41 |
** Discourse's script/bench.rb
Benchmark: https://github.com/discourse/discourse/blob/v1.8.7/script/bench.rb
NOTE: Rails performance was somehow a little degraded with JIT for now.
We should fix this.
(At least I know opt_aref is performing badly in JIT and I have an idea
to fix it. Please wait for the fix.)
*** JIT off
Your Results: (note for timings- percentile is first, duration is second in millisecs)
categories_admin:
50: 17
75: 18
90: 22
99: 29
home_admin:
50: 21
75: 21
90: 27
99: 40
topic_admin:
50: 17
75: 18
90: 22
99: 32
categories:
50: 35
75: 41
90: 43
99: 77
home:
50: 39
75: 46
90: 49
99: 95
topic:
50: 46
75: 52
90: 56
99: 101
*** JIT on
Your Results: (note for timings- percentile is first, duration is second in millisecs)
categories_admin:
50: 19
75: 21
90: 25
99: 33
home_admin:
50: 24
75: 26
90: 30
99: 35
topic_admin:
50: 19
75: 20
90: 25
99: 30
categories:
50: 40
75: 44
90: 48
99: 76
home:
50: 42
75: 48
90: 51
99: 89
topic:
50: 49
75: 55
90: 58
99: 99
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62197 b2dd03c8-39d4-4d8f-98ff-823fe69b080e