Before this commit, Kernel#lambda couldn't tell the difference between
a directly passed literal block and one passed with an ampersand.
A block passed with an ampersand is, semantically speaking, already a
non-lambda proc. When Kernel#lambda receives a non-lambda proc, it
should simply return it.
Implementation-wise, when the VM calls a method with a literal block,
it places the code for the block on the calling control frame and
passes a pointer (the block handler) to the callee. Before this
commit, the VM forwarded block arguments by simply forwarding the
block handler, which leaves the slot for block code unused when a
control frame forwards its block argument. I use the vacant space to
indicate that a frame has forwarded its block argument and inspect
that in Kernel#lambda to detect forwarded blocks.
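Sketched very loosely, the idea looks like the following (the
control-frame struct, field names, sentinel value, and helper below
are hypothetical illustrations, not CRuby's actual layout):

    #include <stdbool.h>

    struct control_frame {
        const void *block_code;     /* code of a literal block, if any */
        const void *block_handler;  /* what is passed down to callees */
    };

    /* Sentinel stored in the otherwise-unused block_code slot when the
     * frame forwards its block argument (as in `foo(&blk)`). */
    #define BLOCK_FORWARDED ((const void *)1)

    /* Kernel#lambda can consult this to tell a forwarded block (already
     * a non-lambda proc, to be returned as-is) from a literal block. */
    static bool
    frame_forwarded_block_p(const struct control_frame *cf)
    {
        return cf->block_code == BLOCK_FORWARDED;
    }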
This is a very ad-hoc solution and relies *heavily* on the way block
passing works in the VM. However, it's the most self-contained solution
I have.
[Bug #15620]
(old)
test.rb:4: warning: The last argument is used as the keyword parameter
test.rb:1: warning: for `foo' defined here; maybe ** should be added to the call?
(new)
test.rb:4: warning: The last argument is used as keyword parameters; maybe ** should be added to the call
test.rb:1: warning: The called method `foo' is defined here
I am trying to fix this error:
http://ci.rvm.jp/results/trunk-gc_compact@silicon-docker/2491596
Somehow we have a page in the `free_pages` list that is full. This
commit refactors the code so that any time we add a page to the
`free_pages` list, we do it via `heap_add_freepage`. That function then
asserts that the number of free slots on that page is not 0.
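A minimal sketch of that shape, assuming simplified stand-ins for the
page and heap structs (only the name heap_add_freepage comes from the
actual code):

    #include <assert.h>
    #include <stddef.h>

    struct heap_page {
        struct heap_page *free_next;
        size_t free_slots;
    };

    struct rb_heap {
        struct heap_page *free_pages;
    };

    /* Every insertion into free_pages goes through this helper, so the
     * invariant "a page on the free list has free slots" is checked in
     * exactly one place. */
    static void
    heap_add_freepage(struct rb_heap *heap, struct heap_page *page)
    {
        assert(page->free_slots != 0);
        page->free_next = heap->free_pages;
        heap->free_pages = page;
    }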
Methods and their definitions can be allocated/deallocated on the
fly. One pathological situation is when a method is deallocated and
another one is allocated immediately after that. The addresses of
those old/new method entries/definitions can then be the same,
depending on the underlying malloc/free implementation.
So pointer comparison is insufficient; we have to check the contents.
To do so, we introduce def->method_serial, an integer unique to that
specific method definition.
PS: Note that method_serial being uintptr_t rather than rb_serial_t
is intentional. This is because rb_serial_t can be bigger than a
pointer on a 32-bit system (rb_serial_t is at least 64 bits). In
order to preserve the old packing of struct rb_call_cache,
rb_serial_t is inappropriate.
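A simplified sketch of how the serial hardens the cache check (the
struct shapes and the cache_hit helper are illustrative, not the real
rb_call_cache layout):

    #include <stdbool.h>
    #include <stdint.h>

    struct method_def {
        uintptr_t method_serial;  /* unique per definition, pointer-sized */
        /* ... */
    };

    struct call_cache {
        const struct method_def *def;
        uintptr_t method_serial;  /* recorded when the cache was filled */
    };

    /* The pointer alone can match a stale entry if the old definition
     * was freed and a new one reuses its address; comparing the serial
     * as well catches that case. */
    static bool
    cache_hit(const struct call_cache *cc, const struct method_def *def)
    {
        return cc->def == def && cc->method_serial == def->method_serial;
    }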
* Default VMIN and VTIME to minimum input.
* Disable parity check bits explicitly.
* Disable all bits for flow control on input.
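In raw termios terms, the settings above correspond to something like
the following sketch (the exact flag set is an assumption built from
standard POSIX flags, not copied verbatim from the commit):

    #include <termios.h>

    static void
    sketch_raw_settings(struct termios *t)
    {
        t->c_cc[VMIN]  = 1;  /* minimum input: block until 1 byte arrives */
        t->c_cc[VTIME] = 0;  /* no inter-byte read timeout */
        /* Disable parity generation/checking bits. */
        t->c_cflag &= ~(tcflag_t)(PARENB | PARODD);
        /* Disable XON/XOFF flow control bits on input. */
        t->c_iflag &= ~(tcflag_t)(IXON | IXOFF | IXANY);
    }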
Co-Authored-By: NARUSE, Yui <naruse@airemix.jp>
https://github.com/ruby/io-console/commit/5ce201a686
Previously, every time a method was defined on a module, we would
recursively walk all subclasses to see if the module was included in
a class which the VM optimizes for (such as Integer#+).
For most method definitions we can tell immediately that this won't
be the case based on the method's name. To do this, we keep a hash of
the method IDs of optimized methods; if the new method's ID isn't in
that hash, we don't need to check subclasses at all.
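The fast path looks roughly like the following sketch; the real code
keys a hash on interned method IDs, while the array, names, and the
definition_needs_subclass_walk helper here are illustrative only:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Illustrative subset of method names the VM optimizes for. */
    static const char *const optimized_method_names[] = {
        "+", "-", "*", "<", "<=", "[]", "[]=",
    };

    static bool
    definition_needs_subclass_walk(const char *mid)
    {
        size_t n = sizeof(optimized_method_names)
                 / sizeof(*optimized_method_names);
        for (size_t i = 0; i < n; i++) {
            if (strcmp(mid, optimized_method_names[i]) == 0) return true;
        }
        return false;  /* common case: skip walking subclasses entirely */
    }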
We know that this is a heap-allocated object (a CLASS, MODULE, or
ICLASS), so we don't need to check whether it is an immediate value.
This should be very slightly faster.
Avoids generating a "throwaway" sentinel class serial. There wasn't
any real harm in doing so (we're at no risk of exhaustion and there'd
be no measurable performance impact), but it feels cleaner that all
class serials actually end up assigned and used (especially now that
we won't overwrite them in a single method definition).
rb_clear_method_cache_by_class calls rb_class_clear_method_cache
recursively on subclasses, where it will bump the class serial and clear
some other data (callable_m_tbl, and some mjit data).
Previously this could end up taking a long time to clear all the
classes if the module was included a few levels deep, and especially
if there were multiple paths to it in the dependency tree (i.e. a
class includes two modules which both include the same other module),
as we would end up revisiting class/iclass/module objects multiple
times.
This commit avoids that by short-circuiting whenever we reach an
object we have already visited. We can check this efficiently by
comparing the class serial of each object we visit against the value
the next class serial had at the start of the pass: any object whose
serial is at or above that starting value has already been visited.
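A condensed sketch of that short-circuit (the class_node layout and
the function names are hypothetical; the real walk covers classes,
iclasses, and modules):

    #include <stddef.h>
    #include <stdint.h>

    struct class_node {
        uint64_t class_serial;
        struct class_node **subclasses;
        size_t n_subclasses;
    };

    static uint64_t next_class_serial = 1;

    static void
    clear_method_cache_rec(struct class_node *klass, uint64_t serial_at_start)
    {
        /* Serials only grow and every visit assigns a fresh one, so a
         * serial at or above the pass's starting value means this node
         * was already visited (and cleared) during this pass. */
        if (klass->class_serial >= serial_at_start) return;

        klass->class_serial = next_class_serial++;
        /* ... clear per-class data: callable_m_tbl, mjit data, ... */
        for (size_t i = 0; i < klass->n_subclasses; i++) {
            clear_method_cache_rec(klass->subclasses[i], serial_at_start);
        }
    }

    /* The entry point snapshots the counter before recursing. */
    static void
    clear_method_cache(struct class_node *root)
    {
        clear_method_cache_rec(root, next_class_serial);
    }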