This fixes various issues that arise when a module is included in or
prepended to a module or class and then refined, or refined and then
included in or prepended to a module or class.
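A minimal sketch of the affected pattern (all module names here are
hypothetical; only the shape matters):
```ruby
module Helper
  def greet; "plain" end
end

module Loud; end
Helper.prepend Loud          # Helper now has an origin iclass

module Patch
  refine Helper do           # refining a module that has an origin
    def greet; "refined" end
  end
end

class Greeter
  include Helper
end

using Patch
p Greeter.new.greet          # with the fix, the refinement is honored: "refined"
```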
Implement this by renaming ensure_origin to rb_ensure_origin, making it
non-static, and calling it when refining a module.
Also fix Module#initialize_copy to handle origins correctly; previously
it did not. For example, this code:
```ruby
module B; end

class A
  def b; 2 end
  prepend B
end

a = A.dup.new

class A
  def b; 1 end
end

p a.b
```
Printed 1 instead of 2. This is because the super chain for
a.singleton_class was:
```
a.singleton_class
A.dup
B(iclass)
B(iclass origin)
A(origin) # not A.dup(origin)
```
The B iclasses were not modified, so their includer entry was still set
to A and not A.dup.
This modifies things so that if the class/module has an origin,
all iclasses between the class/module and the origin are duplicated
and have the correct includer entry set, and the correct origin
is created.
This requires other changes to make sure all tests still pass:
* rb_undef_methods_from doesn't automatically handle classes with
origins, so pass it the origin for Comparable when undefing
methods in Complex. This fixed a failure in the Complex tests.
* When adding a method, the method cache was not cleared
correctly if klass has an origin. Clear the method cache for
the klass before switching to the origin of klass. This fixed
failures in the autoload tests related to overriding require,
without breaking the optimization tests. Also clear the method
cache for both the module and origin when removing a method.
* Module#include? is fixed to skip origin iclasses (see the sketch after this list).
* Refinements are fixed to use the origin class of the module that
has an origin.
* RCLASS_REFINED_BY_ANY is removed as it was only used in a single
place and is no longer needed.
* Marshal#dump is fixed to skip iclass origins.
* rb_method_entry_make is fixed to handle overridden optimized
methods for modules that have origins.
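A minimal sketch of the user-visible contract Module#include? keeps
here, using only plain modules:
```ruby
module B; end
module C; end

module M
  include B
  prepend C    # prepending gives M an origin iclass
end

M.include?(B)  # => true
M.include?(C)  # => true  (prepended modules count)
M.include?(M)  # => false (the origin iclass of M must not make this true)
```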
Fixes [Bug #16852]
The VM stack could overflow here. The condition: a symbol is passed
to a block-taking method via &variable, that symbol has never been
used as an actual method name (so yielding it results in calling
method_missing), and the VM stack is completely full (not a single word
left). This is a once-in-a-blue-moon event, yet it leaves a very small
window for stack overflow. We need to check for it.
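A sketch of the code path in question (under ordinary, non-full-stack
conditions):
```ruby
# The symbol names no real method, so yielding it ends up in method_missing.
obj = Object.new
def obj.method_missing(name, *args)
  [:missing, name]
end

[obj].map(&:never_defined_anywhere)  # => [[:missing, :never_defined_anywhere]]
```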
This commit changes the number of calls of MEMCPY from:
```
                        | send | &:sym
------------------------|------|-------
Symbol already interned | once | twice
Symbol not pinned yet   | none | once
```
to:
```
                        | send  | &:sym
------------------------|-------|-------
Symbol already interned | once  | none
Symbol not pinned yet   | twice | once
```
So it sacrifices the exceptional situations in favor of the normal path.
Symbol#to_proc and Object#send are closely related to each other, so why
not share their implementations? By doing so we can skip a recursive call
of vm_exec(), which should benefit speed.
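The relationship being shared, seen from plain Ruby:
```ruby
# All three dispatch to the same method; the &:sym path goes through
# Symbol#to_proc, which can now reuse Object#send's machinery internally.
"ruby".send(:upcase)          # => "RUBY"
:upcase.to_proc.call("ruby")  # => "RUBY"
%w[a b].map(&:upcase)         # => ["A", "B"]
```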
This changeset gives a slight speedup on my machine:
```
Calculating -------------------------------------
                                       before              after
Optcarrot Lan_Master.nes    38.33488426546287  40.89825082589147 fps
                            40.91288557922081  41.48687465359386
                            40.96591995270991  41.98499064664184
                            41.20461943032173  43.67314690779162
                            42.38344888176518  44.02777536251875
                            43.43563728880915  44.88695892714136
                            43.88082889062643  45.11226186242523
```
This makes it possible for vm_invoke_block to pass the arguments it
receives verbatim to the functions it calls. Because those are tail
calls, the function calls can be strength-reduced into indirect jumps,
which is a huge win.
These two functions were almost identical, except in the T_STRING/T_FLOAT
cases, so why not merge them into one and let the difference be handled
by normal method calls (the slow path)? This does not improve runtime
performance for me, but it at least shrinks rb_eql_opt, for instance,
from 653 bytes to 86 bytes on my machine, according to nm(1).
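The behavioral difference now left to the slow path is visible from
Ruby; Float equality is the classic case:
```ruby
1 == 1.0       # => true   (Float#== allows numeric coercion)
1.eql?(1.0)    # => false  (eql? additionally requires the same class)
1.0.eql?(1.0)  # => true
```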
After sending the SEGV signal there is sometimes no response, so try
adding 2 more seconds. If we cannot get a detailed log, we need to use
gdb/lldb to show the backtrace.
This commit combines the sweep step with moving objects. With this
commit, we can do:
```ruby
GC.start(compact: true)
```
This code will do the following 3 steps:
1. Fully mark the heap
2. Sweep + Move objects
3. Update references
By default, this will compact in the order that heap pages are allocated.
In other words, objects will be packed toward older heap pages (as
opposed to toward heap pages with more pinned objects, which is what
`GC.compact` does).
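A small usage sketch; the `compact_count` key of GC.stat increments
whenever a compacting GC runs:
```ruby
before = GC.stat(:compact_count)
GC.start(compact: true)   # mark, sweep + move, update references in one GC
after = GC.stat(:compact_count)
p after - before          # => 1
```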
WSL 2 is officially released. It uses a real Linux kernel, so almost all
Linux specs work on WSL, except one: gethostbyaddr. I guess name
resolution in WSL is based on Windows, so its behavior is a bit
different from normal Linux.
This change adds a `platform_is_not :wsl` guard and guards out the spec
in question.
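A sketch of how such a guard reads in a spec (the spec body here is
hypothetical):
```ruby
# Skipped on WSL, where gethostbyaddr behaves differently.
platform_is_not :wsl do
  it "returns the hostname for the given address" do
    # ... gethostbyaddr expectations run only off WSL ...
  end
end
```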