The following code crashes because running the GC during finalizers leaves T_ZOMBIE objects on the heap, and calling rb_gc_obj_free on them crashes:
raise_proc = proc do |id|
  GC.start
end

1000.times do
  ObjectSpace.define_finalizer(Object.new, raise_proc)
end
When loading a non-ASCII-compatible file, an error is raised, which
causes a memory leak.
For example:
require "tempfile"
Tempfile.create do |f|
f.write("# -*- coding: UTF-16BE -*-")
f.flush
10.times do
20_000.times do
begin
load(f.path)
rescue
end
end
puts `ps -o rss= -p #{$$}`
end
end
Before:
33904
49072
64528
79216
94576
109504
124768
139536
154928
170256
After:
19568
21296
21664
21728
22192
22256
22416
22272
22272
22272
This commit introduces the `RubyVM::AbstractSyntaxTree::Node#locations` method
and the `RubyVM::AbstractSyntaxTree::Location` class.

A Ruby AST node can hold multiple locations.
`RubyVM::AbstractSyntaxTree::Node#locations` provides a way to access
this location information.

`RubyVM::AbstractSyntaxTree::Location` is a class which holds location information, exposed via these methods:
* `#first_lineno`
* `#first_column`
* `#last_lineno`
* `#last_column`
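
A minimal usage sketch (the set of locations returned varies by node type):

```ruby
node = RubyVM::AbstractSyntaxTree.parse("1 + 2")

node.locations.each do |loc|
  p [loc.first_lineno, loc.first_column, loc.last_lineno, loc.last_column]
end
```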
[Bug #20641] `Gem::BUNDLED_GEMS.warning?` adds a lot of extra
work on top of `require`. When the call ends up actually loading code,
the overhead is fairly marginal.

However, it's not uncommon for code to do some late `require`s in some
paths, so calling `require` for something already required is expected
to be fast, and `bundled_gems.rb` breaks this assumption.

To avoid this, we can add a fast path that in most cases short-circuits
all the heavy computation: if we extract the feature basename and it
doesn't match any of the bundled gems we care about, we can return very
early, as sketched below.
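
A rough sketch of that fast path, as a hypothetical simplification of
`Gem::BUNDLED_GEMS.warning?` (it assumes the `SINCE` table in
`bundled_gems.rb` mapping bundled gem names to versions; the real method
takes more arguments and does more checks):

```ruby
def self.warning?(name)
  # Fast path: reduce "date.rb" / "some/dir/date.rb" to "date" and bail
  # out immediately if it isn't a gem we might warn about.
  feature = File.basename(name, ".*")
  return unless Gem::BUNDLED_GEMS::SINCE.key?(feature)

  # ...slow path: resolve the feature and build the warning message...
end
```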
With this change, `require 'date'` on Ruby 3.3.3 is now only 1.33x
slower than it was on Ruby 3.2.2, whereas before this change it
was at least 100x slower.
Since 730e3b2ce0
("Stop exposing `rb_str_chilled_p`"), we noticed a speed loss on a few
benchmarks that are heavy on string operations. This is partially due to
routines no longer having the option to inline rb_check_frozen_inline()
in non-LTO builds. Make it an inlining candidate again to recover speed.
Testing this patch on my machine, the fannkuchredux benchmark gets a
1.15x speed-up with YJIT and 1.03x without YJIT.
When calling a method that does not accept a positional splat
parameter, using a splatted array that contains a ruby2_keywords
flagged hash, there is no need to duplicate the splatted array.
Previously, Ruby would duplicate the splatted array and potentially
modify it before flattening it to the VM stack.

Use an approach similar to the `f(*ary, **hash)` optimization:
flatten the splatted array to the VM stack without modifying it,
and make any needed modifications directly on the VM stack.
When calling a method that accepts keywords but not a keyword
splat, using a splatted array that contains a ruby2_keywords
flagged hash, there is no need to duplicate the flagged hash:
it will be read to get the keyword values, but it will
not be modified.
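
For illustration, a call shape these last two changes cover (hypothetical
method names; `target` accepts keywords but no positional or keyword
splat, and `args` carries a ruby2_keywords flagged hash):

```ruby
def target(a, b:) = [a, b]

ruby2_keywords def forward(*args)
  # args == [1, {b: 2}] with the trailing hash flagged. The splat below
  # no longer dups args, and the flagged hash is read for its keyword
  # values without being dup'd.
  target(*args)
end

forward(1, b: 2)  # => [1, 2]
```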
This avoids an array allocation when a method that accepts neither
a positional splat nor keywords is called with both a positional
splat and keywords. Previously, Ruby would dup the positional
splat array to append the keyword splat to it, then flatten
the dupped array to the VM stack.
This flattens the given positional splat to the VM stack, then
adds the keyword splat hash after the last positional splat
element on the VM stack, avoiding the need to modify
the positional splat array.
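
For illustration (hypothetical method `f`; since it accepts no keywords,
the keyword splat ends up as an ordinary trailing positional hash):

```ruby
def f(a, b, c) = [a, b, c]

ary = [1, 2]
h = {x: 3}
# Previously ary was dup'd so h could be appended before flattening;
# now ary is flattened to the VM stack and h is placed after it.
f(*ary, **h)  # => [1, 2, {x: 3}]
```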
The `f(arg, *arg, **arg, **arg)` case was previously not optimized.
The optimizer didn't optimize this case because of the multiple
keyword splats, and the compiler didn't optimize it because the
`f(*arg, **arg, **arg)` optimization added in
0ee3960685 didn't apply.
I found it difficult to apply this optimization without changing
the `setup_args_core` API, since by the time you get to the ARGSCAT
case, you don't know whether you were called recursively or directly,
so I'm not sure if it was possible to know at that point whether the
array allocation could be avoided.
This changes the dup_rest argument in `setup_args_core` from an int
to a pointer to int. This allows us to track, across recursive calls,
whether we have allocated a caller-side array for multiple splats
or splat+post. The pointed-to value (*dup_rest) is checked to determine
the `splatarray` argument. If *dup_rest is 1, then use `splatarray true`
(caller-side array allocation), then set *dup_rest back to 0, ensuring
only a single `splatarray true` per method call.
Before calling `setup_args_core`, check whether the array allocation
can be avoided safely using `splatarray false`. Optimizable cases are:
```
// f(*arg)
SPLAT
// f(1, *arg)
ARGSCAT
LIST
// f(*arg, **arg)
ARGSPUSH
SPLAT
HASH nd_brace=0
// f(1, *arg, **arg)
ARGSPUSH
ARGSCAT
LIST
HASH nd_brace=0
```
If so, dup_rest is set to 0 instead of 1 to avoid the allocation.
After calling `setup_args_core`, check the flag. If the flag
includes `VM_CALL_ARGS_SPLAT` and the pointed-to value has changed,
indicating `splatarray true` was used, then also set
`VM_CALL_ARGS_SPLAT_MUT` in the flag.
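
One way to observe which form the compiler emitted is to disassemble a
call site (output abbreviated and version-dependent):

```ruby
insns = RubyVM::InstructionSequence.compile("f(*a, **h)").disasm
# "splatarray false" means the caller-side allocation was avoided;
# "splatarray true" means the array had to be dup'd.
puts insns.lines.grep(/splatarray/)
```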
My initial attempt at this broke the `f(*ary, &ary.pop)` test,
because we were not duplicating the ary in the splat even though
it was modified later (evaluation order issue). The initial attempt
would also break `f(*ary, **ary.pop)` or `f(*ary, kw: ary.pop)` cases
for the same reason. I added test cases for those evaluation
order issues.
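
A sketch of the evaluation order hazard (reconstructed shape of that
test):

```ruby
def f(*args, &blk) = args

blk = proc { }
ary = [1, 2, blk]
# *ary is evaluated before &ary.pop. Without `splatarray true` (a dup),
# the pop would mutate the splatted array to [1, 2] mid-call.
f(*ary, &ary.pop)  # must return [1, 2, blk], not [1, 2]
```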
Add a setup_args_dup_rest_p static function that checks whether a given
node is safe. Call it on the block pass node to determine whether
the block pass node is safe, and on each of the hash
key/value nodes to check that they are safe. If any are not safe,
then set dup_rest = 1 so that `splatarray true` will be used to
avoid the evaluation order issue.
This new approach has the effect of optimizing most cases of
literal keywords after positional splats. Previously, only
static keyword hashes after positional splats avoided array
allocation for the splat. Now, most dynamic keyword hashes
after positional splats also avoid array allocation.
Add allocation tests for dynamic keyword hashes after
positional splats.
setup_args_dup_rest_p is currently fairly conservative. It
could definitely be expanded to handle additional node types
to reduce allocations in additional cases.
The M:N threading stack cleanup machinery tries to call MADV_FREE on the native
thread's stack, and calls rb_bug if it fails. Unfortunately, there's no way to
distinguish between "You passed bad parameters to madvise" and "MADV_FREE is
not supported on the kernel you are running on"; both cases just return EINVAL.
This means that if you have a Ruby that was built on a system with
MADV_FREE and run it on a system without it, you get a crash in nt_free_stack.
I ran into this because rr actually emulates MADV_FREE by just returning EINVAL
and pretending it's not supported (since it can otherwise introduce
nondeterministic behaviour). So if you run bootstraptest/test_ractor.rb under
rr, you get this crash.
I think we should just get rid of the error handling here; freeing memory like
this is strictly optional anyway.
[Bug #20632]