Before, when requiring "bigdecimal/math" in a Bundler context:
> /Users/deivid/.asdf/installs/ruby/3.3.0-dev/lib/ruby/3.3.0+0/bigdecimal/math.rb:2: warning: bigdecimal was loaded from the standard library, but will no longer be part of the default gems since Ruby 3.4.0. Add bigdecimal to your Gemfile or gemspec.
After:
> foo.rb:1: warning: bigdecimal/math is found in bigdecimal, which will no longer be part of the default gems since Ruby 3.4.0. Add bigdecimal to your Gemfile or gemspec.
Before, when requiring "bigdecimal" in a Bundler context:
> foo.rb:1: warning: bigdecimal which will no longer be part of the default gems since Ruby 3.4.0. Add bigdecimal to your Gemfile or gemspec.
After:
> foo.rb:1: warning: bigdecimal was loaded from the standard library, but will no longer be part of the default gems since Ruby 3.4.0. Add bigdecimal to your Gemfile or gemspec.
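As the warning suggests, declaring the gem explicitly silences it; a minimal Gemfile entry:
```ruby
# Gemfile
source "https://rubygems.org"

gem "bigdecimal"
```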
So that generate_index can be implemented with dependencies, such as the compact index
Took this approach from feedback in https://github.com/rubygems/rubygems/pull/6853
By default, running `gem generate_index` will use an installed rubygems-generate_index, or will install that gem and then use the command from it
Apply suggestions from code review
https://github.com/rubygems/rubygems/commit/fc1cb9bc9e
Co-authored-by: Hiroshi SHIBATA <hsbt@ruby-lang.org>
This patch introduces thread-specific storage APIs
for tools which use the `rb_internal_thread_event_hook` APIs.
* `rb_internal_thread_specific_key_create()` creates a tool-specific
thread-local storage key and allocates the storage if not available.
* `rb_internal_thread_specific_set()` stores data in the thread- and
tool-specific storage.
* `rb_internal_thread_specific_get()` retrieves data from the thread-
and tool-specific storage.
Note that `rb_internal_thread_specific_get|set(thread_val, key)`
can be called without the GVL, is async-signal-safe, and is safe
across multiple native threads, so you can call it in any internal
thread event hook, and also from other native threads. Of course,
`thread_val` must remain alive while its data is being accessed.
Note that you are responsible for cleaning up the stored data.
When the RUBY_FREE_ON_SHUTDOWN environment variable is set, manually free memory at shutdown.
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
Co-authored-by: Peter Zhu <peter@peterzhu.ca>
BreakNode, ReturnNode and NextNode all compile the ArgumentsNode
directly, but we weren't accounting for multiple arguments. If there
is more than one argument, we also need to emit a newarray
instruction to put the arguments onto the stack.
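A sketch of the multiple-argument case this fixes: `break` with several values returns them as a single array, which is what the extra newarray builds:
```ruby
def f
  [1, 2, 3].each do |x|
    break x, x * 2 # two arguments: compiled with newarray to build the result
  end
end

p f # => [1, 2]
```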
These are similar to the f(1, *a, &lvar), f(*a, **kw, &lvar) and
f(*a, kw: 1, &lvar) optimizations, but they use the getblockparamproxy
instruction instead of getlocal.
This also fixes the else style to be more similar to the surrounding
code.
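For illustration, a sketch of the kind of call these optimizations target (`bar` is a hypothetical callee); `&blk` here is the method's own block parameter, hence getblockparamproxy:
```ruby
def bar(*args)
  yield(*args) if block_given?
end

def foo(*a, &blk)
  # &blk is the block parameter, so the block pass uses getblockparamproxy
  # rather than getlocal
  bar(*a, &blk)
end

foo(1, 2) { |x, y| p [x, y] } # => [1, 2]
```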
For the following:
```ruby
def f(*a); a end
p f(*a, kw: 3)
```
`setup_parameters_complex` pushes `{kw: 3}` onto `a`. This worked fine
back when `concatarray true` was used and `a` was already a copy. It
does not work correctly with the optimization to switch to
`concatarray false`. This commit duplicates the array on the callee
side in such a case.
This affects cases when passing a regular splat and a keyword splat
(or literal keywords) in a method call, where the method does not
accept keywords.
This allocation could probably be avoided, but doing so would
make `setup_parameters_complex` more complicated.
In cases where the compiler can detect the hash is static, it would
use duphash for the hash part. As the hash is static, there is no need
to allocate an array.
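A sketch of the two cases (`b` and `x` are illustrative): with literal values the keyword hash is static, while a variable value forces a runtime-built hash:
```ruby
def f(*a); a end

b = [1, 2]
x = 3
p f(*b, kw: 3) # => [1, 2, {kw: 3}]; static hash, the compiler can emit duphash
p f(*b, kw: x) # => [1, 2, {kw: 3}]; dynamic hash, built at runtime
```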
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv). If that is safe, then eliminating
it for f(*a, **lvar) and f(*a, **@iv) as the last commit did is
as safe, and eliminating it for f(*a, **lvar, &lvar) and
f(*a, **@iv, &@iv) is also as safe.
The compiler already eliminates the array allocation for
f(*a, &lvar) and f(*a, &@iv), and eliminating the array allocation
for keyword splat is as safe as eliminating it for block passes.
Due to how the compiler works, f(*a, &lvar) and f(*a, &@iv)
do not allocate an array, but f(1, *a, &lvar) and f(1, *a, &@iv)
do. It's probably possible to fix this in the compiler, but it
seems easiest to fix in the peephole optimizer.
Eliminating this array allocation is as safe as the current
elimination of the array allocation for f(*a, &lvar) and
f(*a, &@iv).
Due to how the compiler works, while f(*a) does not allocate an
array, f(1, *a) does. This is possible to fix in the compiler, but
the change is much more complex. This attempts to fix the issue
in a simpler way using the peephole optimizer.
Eliminating this array allocation is safe, since just as in the
f(*a) case, nothing else on the caller side can modify the array.
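One way to observe the difference (a sketch using GC.stat object counts; results are illustrative and can be noisy):
```ruby
def f(x, *a); end

a = [1, 2, 3]
f(1, *a) # warm up the call site

before = GC.stat(:total_allocated_objects)
f(1, *a)
p GC.stat(:total_allocated_objects) - before # 0 once the allocation is eliminated
```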
This follows the same approach used for attr_reader/attr_writer in
2d98593bf5, skipping the checking for
tracing after the first call using the call cache, and clearing the
call cache when tracing is turned on/off.
Fixes [Bug #18886]
Since Ruby 2.4, `return` is supported at toplevel. This makes `eval "return"`
also supported at toplevel.
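For example, when run as a script (a sketch), the eval'd return now stops toplevel execution just like a direct return:
```ruby
# main.rb
p :reached
eval "return"
p :not_reached # never executed; the eval'd return exits the toplevel
```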
This mostly uses the same tests as direct `return` at toplevel, with a couple
differences:
`END {return if false}` is a SyntaxError, but `END {eval "return" if false}`
is not an error since the eval is never executed. `END {return}` is a
SyntaxError, but `END {eval "return"}` is a LocalJumpError.
The following is a SyntaxError:
```ruby
class X
nil&defined?0--begin e=no_method_error(); return; 0;end
end
```
However, the following is not, because the eval is never executed:
```ruby
class X
nil&defined?0--begin e=no_method_error(); eval "return"; 0;end
end
```
Fixes [Bug #19779]
In ae0ceafb0c0d05cc80623b525070255e3abb34ef ruby_dtoa was switched to
use malloc instead of xmalloc, which means that consumers should be
using free instead of xfree. Otherwise we will artificially shrink
oldmalloc_increase_bytes.
st.c redefines malloc and free to be ruby_xmalloc and ruby_xfree, so
when this was copied into hash.c it ended up mismatching an xmalloc with
a regular free, which ended up inflating oldmalloc_increase_bytes when
hashes were freed by minor GC.
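For reference, the affected counter is visible from Ruby, which is one way to sanity-check this kind of accounting fix (values are platform- and workload-dependent):
```ruby
p GC.stat(:oldmalloc_increase_bytes)       # grows as malloc'd memory is retained
p GC.stat(:oldmalloc_increase_bytes_limit) # crossing this can trigger a major GC
```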
Since Ruby 3.0, Ruby has passed a keyword splat as a regular
argument in the case of a call to a Ruby method where the
method does not accept keyword arguments, if the method
call does not contain an argument splat:
```ruby
def self.f(obj) obj end
def self.fs(*obj) obj[0] end
h = {a: 1}
f(**h).equal?(h) # Before: true; After: false
fs(**h).equal?(h) # Before: true; After: false
a = []
f(*a, **h).equal?(h) # Before and After: false
fs(*a, **h).equal?(h) # Before and After: false
```
The fact that the behavior differs when passing an empty
argument splat makes it obvious that something is not
working the way it is intended. Ruby 2 always copied
the keyword splat hash, and that is the expected behavior
in Ruby 3.
This bug is due to a missed check in setup_parameters_complex.
If the keyword splat passed is not mutable, then it points to
an existing object and not a new object, and therefore it must
be copied.
Now, there are 3 specs for the broken behavior of directly
using the keyword splatted hash. Fix two specs and add a
new version guard. Do not keep the specs for the broken
behavior for earlier Ruby versions, in case this fix is
backported. For the ruby2_keywords spec, just remove the
related line, since that line is unrelated to what the
spec is testing.
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
`src_ep[VM_ENV_DATA_INDEX_ME_CREF]` was read out and held without
marking across the allocation in vm_env_new(). In case vm_env_new() ran
compaction, an invalid reference could have been written into
`copied_env`.
It might've been hard to actually produce a crash with this issue
because the field is pinned during marking in rb_execution_context_mark().
Previously, the following crashed with
`vm_assert_env:imemo_type_p(obj, imemo_env)` due to a missing WB:
```ruby
o = Object.new
def o.foo(n)
  freeze
  GC.stress = 1
  # inflate block nesting to get an imemo_env for each level
  n.tap do |i|
    i.tap do |local|
      return Ractor.make_shareable(-> do
        local + i + n
      end)
    end
  end
ensure
  GC.stress = false
  GC.verify_internal_consistency
end
p o.foo(1)[]
```
By the time the recursive env_copy() call returns, `copied_env` could
have aged or turned grey, so we need a WB for the
`ep[VM_ENV_DATA_INDEX_SPECVAL]` assignment, which adds an edge.
Fix: 674eb7df7f
need_major_gc is set when a major GC is required. However, if
gc_stress_no_major is also set, then it will not actually run a major
GC.
For example, the following script will sometimes crash:
```ruby
GC.stress = 1
50000.times.map { [] }
```
With the following message:
```
[BUG] cannot create a new page after major GC
```
The intention of GC.verify_compaction_references is, I believe, to force
every single movable object to be moved, so that it's possible to debug
native extensions which do not correctly update their references to
objects they mark as movable.
To do this, it doubles the number of allocated pages for each size pool,
and sorts the heap pages so that the free ones are swept first; thus,
every object in an old page should be moved into a free slot in one of
the new pages.
This worked fine until movement of objects _between_ size pools during
compaction was implemented. That causes some problems for
verify_compaction_references:
* We were doubling the number of pages in each size pool, but actually
if some objects need to move into a _different_ pool, there's no
guarantee that there'll be enough room in that one.
* It's possible for the sweep & compact cursors to meet in one size pool
before all the objects that want to move into that size pool from
another are processed by the compaction.
You can see these problems by changing some of the movement tests in
test_gc_compact.rb to try and move e.g. 50,000 objects instead of
500; the test is not able to actually move all of the objects in a
single compaction run.
To fix this, we do two things in verify_compaction_references:
* Firstly, we add enough pages to every size pool to make them the same
size. This ensures that their compact cursors will all have space to
move during compaction (even if that means empty pages are
pointlessly compacted).
* Then, we examine every object and determine where it _wants_ to be
compacted into. We use this information to add additional pages to
each size pool to handle all objects which should live there.
With these two changes, we can move arbitrary amounts of objects into
the correct size pool in a single call to verify_compaction_references.
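For reference, a sketch of invoking the verification (keyword argument names as of Ruby 3.2+; check the GC docs for your version):
```ruby
# Expand the heap and compact toward empty pages, verifying that all
# references are updated correctly afterwards.
GC.verify_compaction_references(expand_heap: true, toward: :empty)
```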
My _motivation_ for performing this work was to try and fix some test
stability issues in test_gc_compact.rb. I no longer think that we
actually see this particular bug in rubyci.org, but I also think
verify_compaction_references should do what it says on the tin, so it's
worth keeping.
[Bug #20022]
This works like objspace_each_obj, except instead of being called with
the start & end address of each page, it's called with the page
structure itself.
[Bug #20022]