Previously, backtrace_each fully populated the rb_backtrace_t with all
backtrace frames, even if caller only requested a partial backtrace
(e.g. Kernel#caller_locations(1, 1)). This changes backtrace_each to
only add the requested frames to the rb_backtrace_t.
To do this, backtrace_each needs to be passed the starting frame and
number of frames values passed to Kernel#caller or #caller_locations.
backtrace_each works from the top of the stack to the bottom, where the
bottom is the current frame. Due to how the location for cfuncs is
tracked using the location of the previous iseq, we need to store an
extra frame for the previous iseq if we are limiting the backtrace and
the final backtrace frame (the first one stored) would be a cfunc and
not an iseq.
To limit the amount of work in this case, while scanning until the start
of the requested backtrace, for each iseq, store the cfp. If the first
backtrace frame we care about is a cfunc, use the stored cfp to find the
related iseq. Use a function pointer to handle the storage of the cfp
in the iteration arg, and also store the location of the extra frame
in the iteration arg.
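As a concrete illustration, here is a self-contained toy model of that
scan (the types, stack ordering, and helper names are invented for the
sketch; this is not the CRuby code):
```c
#include <stdio.h>

typedef enum { ISEQ, CFUNC } frame_type;
typedef struct { frame_type type; int line; } Frame;

/* Walk frames[0..n-1] from the top of the stack (index 0, the oldest
 * frame) toward the current frame, emitting the window requested by
 * caller_locations(start, len). While scanning toward the window,
 * remember the most recent iseq frame so a leading cfunc can borrow
 * its location. */
static void
collect(const Frame *frames, int n, int start, int len)
{
    int first = n - start - len;  /* top-down index where the window opens */
    const Frame *last_iseq = NULL;

    if (first < 0) first = 0;
    for (int i = 0; i < n - start; i++) {
        if (i < first) {
            if (frames[i].type == ISEQ) last_iseq = &frames[i];
            continue;
        }
        if (i == first && frames[i].type == CFUNC && last_iseq) {
            /* extra frame: a cfunc has no location of its own, so the
             * previous iseq's entry is stored as well */
            printf("extra iseq frame (line %d)\n", last_iseq->line);
        }
        printf("%s frame (line %d)\n",
               frames[i].type == ISEQ ? "iseq" : "cfunc", frames[i].line);
    }
}

int
main(void)
{
    const Frame stack[] = {   /* index 0 = top of stack */
        {ISEQ, 10}, {ISEQ, 20}, {CFUNC, 0}, {ISEQ, 30}, {ISEQ, 40},
    };
    collect(stack, 5, 1, 2);  /* like caller_locations(1, 2) */
    return 0;
}
```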
backtrace_each needs to return int instead of void in order to signal
when a starting frame larger than the backtrace size is given, as
caller and caller_locations need to return nil rather than an empty
array in
these cases.
To handle cases where a range with a negative end is provided and the
backtrace size is needed to calculate the arguments to pass to
rb_range_beg_len, add a backtrace_size static function that computes
the size, duplicating the frame-counting logic from backtrace_each.
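As a toy example of the arithmetic this enables (mirroring the kind of
computation rb_range_beg_len performs once the size is known):
```c
#include <stdio.h>

int
main(void)
{
    /* Resolving caller(2..-2) against a backtrace of size 10: a
     * negative end counts back from the end of the backtrace. */
    long size = 10, beg = 2, end = -2;
    if (end < 0) end += size;   /* -2 resolves to index 8 */
    long len = end - beg + 1;   /* frames 2..8 -> 7 frames */
    printf("beg=%ld len=%ld\n", beg, len);
    return 0;
}
```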
As backtrace_each only adds the backtrace lines requested,
backtrace_to_*_ary can be simplified to always operate on the entire
backtrace.
Previously, caller_locations(1,1) was about 6.2 times slower for an
800 deep callstack compared to an empty callstack. With this new
approach, it is only 1.3 times slower. It will always be somewhat
slower as it still needs to scan the cfps from the top of the stack
until it finds the first requested backtrace frame.
This initializes the backtrace memory to zero. I do not think this is
necessary, as from my analysis, nothing during the setting of the
backtrace entries can cause a garbage collection, but it seems the
safest approach, and it's unlikely the performance decrease is
significant.
This removes the rb_backtrace_t backtrace_base member. backtrace
and backtrace_base were initialized to the same value, and neither
is modified, so it doesn't make sense to have two pointers.
This also removes LOCATION_TYPE_IFUNC from vm_backtrace.c, as
the value is never set.
Fixes [Bug #17031]
Right now `SomeClass.method` is properly named, but `SomeModule.method`
is displayed as `#<Module:0x000055eb5d95adc8>.method`, which makes
profiling annoying.
Saves committers' daily lives by avoiding #include-ing everything from
internal.h; instead, each file now includes only what it needs. This
should significantly speed up incremental builds.
We take the following inclusion order in this changeset:
1. "ruby/config.h", where _GNU_SOURCE is defined (must be the very
first thing among everything).
2. RUBY_EXTCONF_H if any.
3. Standard C headers, sorted alphabetically.
4. Other system headers, maybe guarded by #ifdef
5. Everything else, sorted alphabetically.
Exceptions are the win32-related headers, which tend not to be
self-contained (those headers have inclusion-order dependencies).
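For instance, a typical file's include block under this order might
look like the following (the specific headers are illustrative):
```c
#include "ruby/config.h"        /* 1. must be first: defines _GNU_SOURCE */

#ifdef RUBY_EXTCONF_H
# include RUBY_EXTCONF_H        /* 2. extension config, if any */
#endif

#include <stdio.h>              /* 3. standard C headers, sorted */
#include <string.h>

#ifdef HAVE_UNISTD_H
# include <unistd.h>            /* 4. other system headers, guarded */
#endif

#include "internal.h"           /* 5. everything else, sorted */
#include "vm_core.h"
```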
This changeset basically replaces `ruby_xmalloc(x * y)` with
`ruby_xmalloc2(x, y)`. Some convenience functions are also provided,
for instance `rb_xmalloc_mul_add(x, y, z)`, which allocates
x * y + z bytes.
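Why this matters: `x * y` can silently wrap in C, so the allocation
ends up too small. A toy model of the overflow checks such `*2` /
mul_add variants exist to perform (not the CRuby implementation):
```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Overflow-checked x * y allocation: refuse sizes that wrap. */
static void *
xmalloc2_model(size_t x, size_t y)
{
    if (y != 0 && x > SIZE_MAX / y) {
        fprintf(stderr, "size overflow in x * y\n");
        abort();
    }
    return malloc(x * y);
}

/* Overflow-checked x * y + z, as in rb_xmalloc_mul_add(x, y, z). */
static void *
xmalloc_mul_add_model(size_t x, size_t y, size_t z)
{
    if ((y != 0 && x > SIZE_MAX / y) || x * y > SIZE_MAX - z) {
        fprintf(stderr, "size overflow in x * y + z\n");
        abort();
    }
    return malloc(x * y + z);
}

int
main(void)
{
    free(xmalloc2_model(16, 4));           /* fine: 64 bytes */
    free(xmalloc_mul_add_model(8, 8, 1));  /* fine: 65 bytes */
    xmalloc2_model(SIZE_MAX / 2, 3);       /* aborts: would wrap */
    return 0;
}
```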
This reverts commits: 10d6a3aca7 8ba48c1b85 fba8627dc1 dd883de5ba
6c6a25feca 167e6b48f1 7cb96d41a5 3207979278 595b3c4fdd 1521f7cf89
c11c5e69ac cf33608203 3632a812c0 f56506be0d 86427a3219.
The reason for the revert is that we observed an ABA problem around
the inline method cache. When a cache mishits, we search for a method
entry, and if the entry is identical to what was cached before, we
reuse the cache. But the commits being reverted here introduced
situations where a method entry is freed and then the identical memory
region is reused for another method entry. An inline method cache
cannot detect that ABA.
Here is code that reproduces such a situation:
```ruby
require 'prime'
class << Integer
alias org_sqrt sqrt
def sqrt(n)
raise
end
GC.stress = true
Prime.each(7*37){} rescue nil # <- Here we populate CC
class << Object.new; end
# This adjacent remove-then-alias maneuver
# frees a method entry, then immediately
# reuses it for another.
remove_method :sqrt
alias sqrt org_sqrt
end
Prime.each(7*37).to_a # <- SEGV
```
Now that we have eliminated most destructive operations over
rb_method_entry_t / rb_callable_method_entry_t, let's make them
mostly immutable and mark them const.
One exception is rb_export_method(), which destructively modifies the
visibility of method entries. I have left that operation as is because
I suspect that destructiveness is the nature of that function.
We can check the function pointer passed to rb_define_global_function
the same way we do in rb_define_method. It turns out that almost
everybody misunderstands this API.
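The usual mistake is a C function whose parameter list does not match
the declared arity. A minimal extension sketch of correct usage (the
function and names are hypothetical): with arity 1, the C function
must take exactly (VALUE self, VALUE arg), and forgetting the self
parameter is the typical mismatch this check catches.
```c
#include "ruby.h"

/* Arity 1 means (VALUE self, VALUE arg) — two VALUE parameters. */
static VALUE
my_identity(VALUE self, VALUE arg)
{
    (void)self;
    return arg;
}

void
Init_my_extension(void)
{
    rb_define_global_function("my_identity", my_identity, 1);
}
```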
lineno is an int, but INT2FIX(0) was being assigned to it.
[Bug #15719] [ruby-core:91911]
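For context, a self-contained toy showing why this matters (the macro
here models CRuby's Fixnum tagging, it is not the real INT2FIX):
```c
#include <stdio.h>

/* Models CRuby's Fixnum tagging: INT2FIX(n) == ((n) << 1) | 1, so
 * INT2FIX(0) is the tagged VALUE 1, not the C integer 0. */
#define MODEL_INT2FIX(n) (((long)(n) << 1) | 1)

int
main(void)
{
    int buggy_lineno = (int)MODEL_INT2FIX(0);  /* stores 1, not 0 */
    int fixed_lineno = 0;                      /* the fix: plain 0 */
    printf("buggy=%d fixed=%d\n", buggy_lineno, fixed_lineno);
    return 0;
}
```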
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67326 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_backtrace.c (rb_debug_inspector_open): escape all env using
`rb_vm_stack_to_heap()` before making bindings.
[Bug #15105]
There is a complicated story behind this issue:
Without this patch, IFUNC frames are not escaped. An IFUNC frame
points to a CFUNC ep as its previous ep. However, the CFUNC ep can be
escaped as a result of making bindings for Ruby-level frames.
The IFUNC's ep can then point to an invalidated ep, and
`rb_iter_break()` will fail. This is why `any?` fails.
* test/-ext-/debug/test_debug.rb: add a test.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64800 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This reverts commit r63265.
ko1 said I should not have committed this! I'm sorry!
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63267 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
rb_profile_frames was always behaving as if the value given for the
start parameter was 0.
The reason for this was that it checked `if (start > 0) {` and then
continued without updating the control frame pointer or anything else,
other than decrementing start.
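A self-contained toy model of the bug (invented names, not the CRuby
loop): skipping the first `start` frames must still advance the
cursor, otherwise `start` is decremented in place and collection
effectively begins at frame 0:
```c
#include <stdio.h>

int
main(void)
{
    int frames[] = {100, 200, 300, 400};   /* stand-ins for frames */
    int n = 4, start = 2, collected = 0;

    for (int i = 0; i < n; ) {
        if (start > 0) {
            start--;
            i++;    /* the fix: advance past the skipped frame; the
                     * buggy code omitted this, so it spun on frame 0
                     * until start hit 0 and collected from the top */
            continue;
        }
        printf("frame %d\n", frames[i]);
        i++;
        collected++;
    }
    printf("collected %d frames\n", collected);
    return 0;
}
```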
[ruby-core:86147] [Bug #14607]
Co-authored-by: Dylan Thacker-Smith <Dylan.Smith@shopify.com>
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63265 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
which has been developed by Takashi Kokubun <takashikkbn@gmail.com> as
YARV-MJIT. Many of its bugs were fixed by wanabe <s.wanabe@gmail.com>.
This JIT compiler is designed to be a safe migration path for
introducing a JIT compiler to MRI. So this commit does not include any
bytecode changes or dynamic instruction modifications, which are done
in the original MJIT.
This commit even strips some aggressive optimizations out of
YARV-MJIT, and thus it's slower than YARV-MJIT too. But it's still
considerably faster than Ruby 2.5 in some benchmarks (attached below).
Note that this JIT compiler passes `make test`, `make test-all`, and
`make test-spec`, both without and with JIT. Not only is it perfectly
safe with JIT disabled, because it does not replace VM instructions
unlike MJIT, but with JIT enabled it also stably runs Ruby
applications, including Rails applications.
I'm expecting this version to be just the "initial" JIT compiler. I
have many optimization ideas that were skipped for the initial merge,
and you can easily replace this JIT compiler with a faster one just by
replacing mjit_compile.c; the `mjit_compile` interface is designed for
that purpose.
common.mk: update dependencies for mjit_compile.c.
internal.h: declare `rb_vm_insn_addr2insn` for MJIT.
vm.c: exclude some definitions if `-DMJIT_HEADER` is passed to the
compiler. This avoids including some functions that take a long time
to compile, e.g. vm_exec_core. Part of this is achieved in
transform_mjit_header.rb (see `IGNORED_FUNCTIONS`), but the rest is
resolved manually for now. Load mjit_helper.h for the MJIT header.
mjit_helper.h: New. This is a file used only by JIT-ed code. I'll
refactor `mjit_call_cfunc` later.
vm_eval.c: add some #ifdef switches to skip compiling some functions
like Init_vm_eval.
win32/mkexports.rb: export thread/ec functions, which are used by MJIT.
include/ruby/defines.h: add the MJIT_FUNC_EXPORTED macro alias to
clarify that a function is exported only for MJIT.
array.c: export a function used by MJIT.
bignum.c: ditto.
class.c: ditto.
compile.c: ditto.
error.c: ditto.
gc.c: ditto.
hash.c: ditto.
iseq.c: ditto.
numeric.c: ditto.
object.c: ditto.
proc.c: ditto.
re.c: ditto.
st.c: ditto.
string.c: ditto.
thread.c: ditto.
variable.c: ditto.
vm_backtrace.c: ditto.
vm_insnhelper.c: ditto.
vm_method.c: ditto.
I would like to improve the maintainability of function exports, but I
believe this approach is acceptable for the initial merge if we clarify
that the new exports are for MJIT (so that we can use them as a TODO
list to fix) and add unit tests to detect unresolved symbols.
I'll add unit tests of JIT compilations in succeeding commits.
Author: Takashi Kokubun <takashikkbn@gmail.com>
Contributor: wanabe <s.wanabe@gmail.com>
Part of [Feature #14235]
---
* Known issues
* Code generated by gcc is faster than clang's. The benchmark may be
worse on macOS. The following benchmark results were produced with
gcc on Linux.
* Performance decreases when Google Chrome is running.
* JIT can work on MinGW, but it doesn't improve performance, at least
in short-running benchmarks.
* Currently it doesn't perform well with Rails. We'll try to fix this
before release.
---
* Benchmark results
Benchmarked with:
Intel i7-4790K 4.0GHz (8 cores) with 16GB memory, x86-64 Ubuntu
- 2.0.0-p0: Ruby 2.0.0-p0
- r62186: Ruby trunk (early 2.6.0), before MJIT changes
- JIT off: On this commit, but without `--jit` option
- JIT on: On this commit, and with `--jit` option
** Optcarrot fps
Benchmark: https://github.com/mame/optcarrot
| |2.0.0-p0 |r62186 |JIT off |JIT on |
|:--------|:--------|:--------|:--------|:--------|
|fps |37.32 |51.46 |51.31 |58.88 |
|vs 2.0.0 |1.00x |1.38x |1.37x |1.58x |
** MJIT benchmarks
Benchmark: https://github.com/benchmark-driver/mjit-benchmarks
(Original: https://github.com/vnmakarov/ruby/tree/rtl_mjit_branch/MJIT-benchmarks)
| |2.0.0-p0 |r62186 |JIT off |JIT on |
|:----------|:--------|:--------|:--------|:--------|
|aread |1.00 |1.09 |1.07 |2.19 |
|aref |1.00 |1.13 |1.11 |2.22 |
|aset |1.00 |1.50 |1.45 |2.64 |
|awrite |1.00 |1.17 |1.13 |2.20 |
|call |1.00 |1.29 |1.26 |2.02 |
|const2 |1.00 |1.10 |1.10 |2.19 |
|const |1.00 |1.11 |1.10 |2.19 |
|fannk |1.00 |1.04 |1.02 |1.00 |
|fib |1.00 |1.32 |1.31 |1.84 |
|ivread |1.00 |1.13 |1.12 |2.43 |
|ivwrite |1.00 |1.23 |1.21 |2.40 |
|mandelbrot |1.00 |1.13 |1.16 |1.28 |
|meteor |1.00 |2.97 |2.92 |3.17 |
|nbody |1.00 |1.17 |1.15 |1.49 |
|nest-ntimes|1.00 |1.22 |1.20 |1.39 |
|nest-while |1.00 |1.10 |1.10 |1.37 |
|norm |1.00 |1.18 |1.16 |1.24 |
|nsvb |1.00 |1.16 |1.16 |1.17 |
|red-black |1.00 |1.02 |0.99 |1.12 |
|sieve |1.00 |1.30 |1.28 |1.62 |
|trees |1.00 |1.14 |1.13 |1.19 |
|while |1.00 |1.12 |1.11 |2.41 |
** Discourse's script/bench.rb
Benchmark: https://github.com/discourse/discourse/blob/v1.8.7/script/bench.rb
NOTE: Rails performance was somehow slightly degraded with JIT for now.
We should fix this.
(At least I know opt_aref is performing badly with JIT and I have an
idea for how to fix it. Please wait for the fix.)
*** JIT off
Your Results: (note for timings- percentile is first, duration is second in millisecs)
categories_admin:
50: 17
75: 18
90: 22
99: 29
home_admin:
50: 21
75: 21
90: 27
99: 40
topic_admin:
50: 17
75: 18
90: 22
99: 32
categories:
50: 35
75: 41
90: 43
99: 77
home:
50: 39
75: 46
90: 49
99: 95
topic:
50: 46
75: 52
90: 56
99: 101
*** JIT on
Your Results: (note for timings- percentile is first, duration is second in millisecs)
categories_admin:
50: 19
75: 21
90: 25
99: 33
home_admin:
50: 24
75: 26
90: 30
99: 35
topic_admin:
50: 19
75: 20
90: 25
99: 30
categories:
50: 40
75: 44
90: 48
99: 76
home:
50: 42
75: 48
90: 51
99: 89
topic:
50: 49
75: 55
90: 58
99: 99
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62197 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_backtrace.c (rb_ec_backtrace_location_ary): make it static and
remove `rb_` prefix.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@60809 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* tool/instruction.rb: create `trace_` prefix instructions.
* compile.c (ADD_TRACE): do not add `trace` instructions but add
TRACE link elements. TRACE elements will be unified with the next
instruction as instruction information.
* vm_trace.c (update_global_event_hook): modify all ISeqs when
hooks are enabled.
* iseq.c (rb_iseq_trace_set): added to toggle `trace_` instructions.
* vm_insnhelper.c (vm_trace): added.
This function is a body of `trace_` prefix instructions.
* vm_insnhelper.h (JUMP): save PC to a control frame.
* insns.def (trace): removed.
* vm_exec.h (INSN_ENTRY_SIG): add debug output (disabled).
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@60763 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* iseq.c (find_line_no): renamed to rb_iseq_line_no().
* vm_backtrace.c (calc_lineno): add a comment explaining why we need
to use "pos-1".
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@60733 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_backtrace.c (rb_backtrace_use_iseq_first_lineno_for_last_location):
added. It sets the last location's line to the corresponding iseq's
first line number.
* vm_args.c (raise_argument_error): use added function.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@60725 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
to represent execution context [Feature #14038]
* vm_core.h (rb_thread_t): rb_thread_t::ec is now a pointer.
There is a lot of code using `th` to access execution context members
(such as cfp, VM stack and so on). To access `ec`, it needs
`th->ec->...` (adding one indirection), so we replace such code by
passing `ec` instead of `th` (see the sketch after this list).
* vm_core.h (GET_EC()): introduced to access current ec. Also
remove `ruby_current_thread` global variable.
* cont.c (rb_context_t): introduce rb_context_t::thread_ptr instead of
rb_context_t::thread_value.
* cont.c (ec_set_vm_stack): added to update vm_stack explicitly.
* cont.c (ec_switch): added to switch ec explicitly.
* cont.c (rb_fiber_close): added to terminate fibers explicitly.
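A self-contained sketch of the indirection being removed (toy struct
definitions, not the CRuby ones):
```c
#include <stdio.h>

typedef struct { int sp; } toy_cfp;
typedef struct { toy_cfp *cfp; } toy_ec;     /* execution context */
typedef struct { toy_ec *ec; } toy_thread;   /* thread points to ec */

/* Before: every access pays two pointer loads (th->ec, ec->cfp). */
static int sp_via_thread(toy_thread *th) { return th->ec->cfp->sp; }

/* After: callers pass ec directly, saving one load per access. */
static int sp_via_ec(toy_ec *ec) { return ec->cfp->sp; }

int
main(void)
{
    toy_cfp cfp = { 42 };
    toy_ec ec = { &cfp };
    toy_thread th = { &ec };
    printf("%d %d\n", sp_via_thread(&th), sp_via_ec(&ec));
    return 0;
}
```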
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@60440 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_core.h (rb_thread_ptr): added to replace GetThreadPtr() macro.
* thread.c (some functions): use "target_th" instead of "th" to make
clear that it is not the current thread.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@59192 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
The return value of EXEC_TAG() was saved in an "int state". Instead
of "int", use "enum ruby_tag_type". The first EXEC_TAG() value should
be 0, so define TAG_NONE (= 0) and use it.
Some code used "status" instead of "state"; to make things clear,
rename it to "state".
We could also rename "state" to "tag_state", but that is outside the
scope of this ticket.
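The resulting idiom, modeled as a self-contained toy (EXEC_TAG() is
essentially a setjmp; the enum here is a stand-in with illustrative
tag values, not the full CRuby definition):
```c
#include <setjmp.h>
#include <stdio.h>

enum ruby_tag_type { TAG_NONE = 0, TAG_RAISE = 6 };

static jmp_buf tag;

static void
raise_something(void)
{
    longjmp(tag, TAG_RAISE);   /* non-local jump, like raising */
}

int
main(void)
{
    enum ruby_tag_type state;

    /* TAG_NONE (0) on the initial pass; a tag value when unwinding. */
    state = (enum ruby_tag_type)setjmp(tag);
    if (state == TAG_NONE) {
        printf("protected code runs\n");
        raise_something();
    }
    else {
        printf("unwound with tag %d\n", (int)state);
    }
    return 0;
}
```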
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@59155 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_core.h: rename absolute_path to realpath because that is the
expected name. External APIs (#absolute_path methods) remain.
* vm_core.h: remove rb_iseq_location_struct::path and
rb_iseq_location_struct::absolute_path and introduce pathobj.
If the given path equals the given absolute_path (which is true in
most cases), pathobj is simply the given path String. If they differ,
pathobj is an Array where pathobj[0] is the path and pathobj[1] is
the realpath.
This size optimization saves 8 bytes:
sizeof(struct rb_iseq_constant_body) goes from 200 bytes to 192 bytes
on a 64-bit CPU.
To support this change, the following functions are introduced (the
first two are sketched after this list):
* pathobj_path() (defined in vm_core.h)
* pathobj_realpath() (ditto)
* rb_iseq_path() (decl. in vm_core.h)
* rb_iseq_realpath() (ditto)
* rb_iseq_pathobj_new() (ditto)
* rb_iseq_pathobj_set() (ditto)
* vm_core.h (rb_binding_t): use pathobj instead of path. When a
binding is given to eval methods, the realpath (absolute_path) used to
be the caller's realpath; however, they should use the binding's
realpath.
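A hedged sketch of what the pathobj_path()/pathobj_realpath()
accessors look like (extension-style code against the C API; the real
definitions live in vm_core.h and may differ in detail):
```c
#include "ruby.h"

/* pathobj is either the path String itself (path == realpath), or a
 * two-element Array of [path, realpath]. */
static VALUE
pathobj_path_model(VALUE pathobj)
{
    if (RB_TYPE_P(pathobj, T_STRING)) return pathobj;
    return rb_ary_entry(pathobj, 0);   /* pathobj[0]: path */
}

static VALUE
pathobj_realpath_model(VALUE pathobj)
{
    if (RB_TYPE_P(pathobj, T_STRING)) return pathobj;
    return rb_ary_entry(pathobj, 1);   /* pathobj[1]: realpath */
}
```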
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@58979 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
The goal is to reduce rb_context_t and rb_fiber_t size
by removing the need to store the entire rb_thread_t in
there.
[ruby-core:81045] Work-in-progress: soon, we will move more fields here.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@58614 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
* vm_backtrace.c (rb_threadptr_backtrace_object): rename and
extern.
* vm_backtrace.c (rb_threadptr_backtrace_str_ary): rename as
threadptr since the parameter is rb_thread_t*.
* vm_backtrace.c (rb_threadptr_backtrace_location_ary): ditto.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@58377 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
actually check its availability rather than checking GCC's version.
* configure.in (WARN_UNUSED_RESULT): moved to here.
* configure.in (RUBY_FUNC_ATTRIBUTE): change the probe function
declaration to return int rather than void, because it makes no
sense for a warn_unused_result-attributed function to return void.
The funny thing, however, is that it also makes no sense for a
noreturn-attributed function to return int, so there is a fundamental
conflict between the two. While testing this, I confirmed that both
GCC 6 and Clang 3.8 prefer int over void to correctly detect the
necessary attributes under this setup (sketched below). This may
change in the future.
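A minimal model of the kind of probe involved (illustrative; not the
actual configure.in test):
```c
/* Returning int makes warn_unused_result meaningful; with a void
 * return the attribute would be vacuous and the probe could not tell
 * whether the compiler really supports it. */
__attribute__((warn_unused_result))
static int
probe(void)
{
    return 0;
}

int
main(void)
{
    return probe();   /* discarding probe()'s result would warn */
}
```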
* internal.h (UNINITIALIZED_VAR): renamed to MAYBE_UNUSED, then
moved to configure.in for the same reason we moved
WARN_UNUSED_RESULT.
* configure.in (MAYBE_UNUSED): moved to here.
* internal.h (__has_attribute): deleted, because it has no use now.
* string.c (rb_str_enumerate_lines): adjust to the macro rename.
* string.c (rb_str_enumerate_bytes): ditto.
* string.c (rb_str_enumerate_chars): ditto.
* string.c (rb_str_enumerate_codepoints): ditto.
* thread.c (do_select): ditto.
* vm_backtrace.c (rb_debug_inspector_open): ditto.
* vsnprintf.c (BSD_vfprintf): ditto.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@56169 b2dd03c8-39d4-4d8f-98ff-823fe69b080e