Shapes provides us with an (almost) exact count of instance variables.
We only need to check for Qundef when an IV has been "undefined"
Prefer to use ROBJECT_IV_COUNT when iterating IVs
```
../src/variable.c(1440): warning C4244: 'initializing': conversion from 'double' to 'uint32_t', possible loss of data
../src/variable.c(1470): warning C4244: 'initializing': conversion from 'double' to 'uint32_t', possible loss of data
```
TODO: check for `newsize` overflow
Prior to this commit, we were reading and writing ivar index and
shape ID in inline caches in two separate instructions when
getting and setting ivars. This meant there was a race condition
with ractors and these caches where one ractor could change
a value in the cache while another was still reading from it.
This commit instead reads and writes shape ID and ivar index to
inline caches atomically so there is no longer a race condition.
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Tabs were expanded because the file did not have any tab indentation in unedited lines.
Please update your editor config, and use misc/expand_tabs.rb in the pre-commit hook.
Object Shapes is used for accessing instance variables and representing the
"frozenness" of objects. Object instances have a "shape" and the shape
represents some attributes of the object (currently which instance variables are
set and the "frozenness"). Shapes form a tree data structure, and when a new
instance variable is set on an object, that object "transitions" to a new shape
in the shape tree. Each shape has an ID that is used for caching. The shape
structure is independent of class, so objects of different types can have the
same shape.
For example:
```ruby
class Foo
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

class Bar
  def initialize
    # Starts with shape id 0
    @a = 1 # transitions to shape id 1
    @b = 1 # transitions to shape id 2
  end
end

foo = Foo.new # `foo` has shape id 2
bar = Bar.new # `bar` has shape id 2
```
Both `foo` and `bar` instances have the same shape because they both set
instance variables of the same name in the same order.
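Conversely, setting the same instance variables in a different order ends on a
different shape. A sketch (the class name and shape ids below are illustrative):
```ruby
class Baz
  def initialize
    # Starts with shape id 0
    @b = 1 # transitions to a different branch of the shape tree than @a would
    @a = 1 # ends on a shape id other than 2
  end
end

baz = Baz.new # `baz` does not share a shape with `foo` or `bar`
```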
This technique can help to improve inline cache hits as well as generate more
efficient machine code in JIT compilers.
This commit also adds some methods for debugging shapes on objects. See
`RubyVM::Shape` for more details.
For more context on Object Shapes, see [Feature: #18776]
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
This commit implements Objects on Variable Width Allocation. This allows
objects with more ivars to be embedded (i.e. their contents directly follow the
object header), which improves performance through better cache locality.
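This is invisible at the Ruby level apart from memory behavior. As a hedged
sketch, assuming the pre-VWA embed limit of three ivars on 64-bit builds (the
class name is illustrative, and reported sizes vary by version and build):
```ruby
require "objspace"

class Point
  def initialize
    # Four ivars: before VWA this typically forced a separately allocated
    # ivar buffer; with VWA the object can land in a larger slot and keep
    # its ivars embedded right after the header.
    @x = 1
    @y = 2
    @z = 3
    @w = 4
  end
end

ObjectSpace.memsize_of(Point.new) # reported size depends on the slot/pool used
```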
If an autoload exists for a constant, but the path for the autoload
was required, const_source_location would return [false, 0] instead
of the actual file and line. This fixes it by setting the appropriate
file and line in rb_const_set, and saving the file and line in
const_tbl_update before they get reset by current_autoload_data.
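A minimal repro sketch of the scenario (file name, path, and line in the output
are illustrative):
```ruby
# foo.rb:
#   class Foo; end

autoload :Foo, "./foo"
require "./foo" # the autoload path is required directly

# Previously this returned [false, 0]; now it reports where Foo was set:
Object.const_source_location(:Foo) # => [".../foo.rb", 1]
```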
Fixes [Bug #18624]
Object#autoload implements a custom per-thread "mutex" for blocking
threads waiting on autoloading a feature. This causes problems when used
with the fiber scheduler. We swap the implementation to use a Ruby Mutex,
which is fiber-aware.
When `const_added` is called while processing an `autoload`, it can
cause a synchronization issue because of thread switching.
http://ci.rvm.jp/logfiles/brlog.trunk.20220407-152213#L489
```
1)
Module#autoload (concurrently) raises a LoadError in each thread if the file does not exist ERROR
NameError: uninitialized constant ModuleSpecs::Autoload::FileDoesNotExist
ModuleSpecs::Autoload::FileDoesNotExist
^^^^^^^^^^^^^^^^^^
/tmp/ruby/v3/src/trunk/spec/ruby/core/module/autoload_spec.rb:965:in `block (5 levels) in <top (required)>'
```
This commit reintroduces finer-grained constant cache invalidation.
After 8008fb7 got merged, it was causing issues on token-threaded
builds (such as on Windows).
The issue was that when you're iterating through instruction sequences
and using the translator functions to get back the instruction structs,
you're either using `rb_vm_insn_null_translator` or
`rb_vm_insn_addr2insn2`, depending on whether it's a direct-threading build.
`rb_vm_insn_addr2insn2` does some normalization to always return to
you the non-trace version of whatever instruction you're looking at.
`rb_vm_insn_null_translator` does not do that normalization.
This means that when you're looping through the instructions if you're
trying to do an opcode comparison, it can change depending on the type
of threading that you're using. This can be very confusing. So, this
commit creates a new translator function
`rb_vm_insn_normalizing_translator` to always return the non-trace
version so that opcode comparisons don't have to worry about different
configurations.
[Feature #18589]
This reverts commits for [Feature #18589]:
* 8008fb7352 "Update formatting per feedback"
* 8f6eaca2e1 "Delete ID from constant cache table if it becomes empty on ISEQ free"
* 629908586b "Finer-grained inline constant cache invalidation"
MSWin builds on AppVeyor have been crashing since the merge.
Current behavior - caches depend on a global counter. All constant mutations cause caches to be invalidated.
```ruby
class A
  B = 1
end

def foo
  A::B # inline cache depends on global counter
end
foo # populate inline cache
foo # hit inline cache
C = 1 # global counter increments, all caches are invalidated
foo # misses inline cache due to `C = 1`
```
Proposed behavior - caches depend on name components. Only constant mutations with corresponding names will invalidate the cache.
```ruby
class A
  B = 1
end

def foo
  A::B # inline cache depends on constants named "A" and "B"
end
foo # populate inline cache
foo # hit inline cache
C = 1 # caches that depend on the name "C" are invalidated
foo # hits inline cache because IC only depends on "A" and "B"
```
Examples of breaking the new cache:
```ruby
module C
  # Breaks `foo` cache because "A" constant is set and the cache in foo depends
  # on "A" and "B"
  class A; end
end

B = 1
```
We expect the new cache scheme to be invalidated less often because names aren't frequently reused. With the cache being invalidated less, we can rely on its stability more to keep our constant references fast and reduce the need to throw away generated code in YJIT.
[Feature #17881]
Works similarly to `method_added` but for constants.
```ruby
Foo::BAR = 42 # call Foo.const_added(:BAR)
class Foo::Baz; end # call Foo.const_added(:Baz)
Foo.autoload(:Something, "path") # call Foo.const_added(:Something)
```
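The hook is defined like `method_added`, as a class method that receives the
new constant's name. A minimal sketch (the printed strings are illustrative):
```ruby
class Foo
  def self.const_added(const_name)
    super
    puts "Foo gained constant #{const_name}"
  end
end

Foo::BAR = 42                    # prints "Foo gained constant BAR"
Foo.autoload(:Something, "path") # prints "Foo gained constant Something"
```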
In an effort to simplify the logic YJIT generates for accessing instance
variables, YJIT ensures that a given name-to-index mapping exists at
compile time. In the case that the mapping doesn't exist, it was created
by calling rb_ivar_set() with Qundef on the sample object seen at
compile time. This hack doesn't work if the sample object happens to be
frozen, in which case YJIT would raise a FrozenError unexpectedly.
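At the Ruby level the problematic operation is just an ivar write on a frozen
object; this sketch shows the error the old hack could surface (variable names
illustrative):
```ruby
sample = Object.new.freeze

# Writing an ivar on a frozen object raises, which is why creating the
# name-to-index mapping via rb_ivar_set() on a frozen sample was unsafe.
sample.instance_variable_set(:@ivar, 1) # raises FrozenError
```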
To deal with this, make a new function that only reserves the mapping
but doesn't touch the object. This is rb_obj_ensure_iv_index_mapping().
This new function supersedes the functionality of rb_iv_index_tbl_lookup(),
so the latter was removed.
Reported by and includes a test case from John Hawthorn <john@hawthorn.email>
Fixes: GH-282
Previously, if an autoload failed (the file was loaded, but the
constant was not defined by the autoloaded file), Ruby would try
to autoload again if you deleted the autoloaded file from
$LOADED_FEATURES. With this change, the autoload and the
constant itself are removed as soon as the autoload fails.
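A sketch of the intended behavior (file and constant names are illustrative):
```ruby
# stuck.rb exists but never defines Stuck
autoload :Stuck, "./stuck"

begin
  Stuck # loads stuck.rb, then raises NameError for the missing constant
rescue NameError
end

autoload?(:Stuck) # => nil (the autoload is removed as soon as it fails)
defined?(Stuck)   # => nil (the constant is removed too)
```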
To handle cases where multiple threads are autoloading, when
deleting an autoload, handle the case where another thread
already deleted it.
Fixes [Bug #15790]
It's possible for `build_const_pathname` to be called when `rb_cString` is
still NULL. There is a fix-up step when `rb_cString` is initialized, but
it only applies to `fstring` instances.
Before this commit, const_get with inherit=true and constant lookup
expressions searched the ancestors of the starting point in an order
different from `starting_point.ancestors`.
Items in the ancestry list introduced through prepend were searched
after searching the module they were prepended into. This oddity allowed
for situations where constant lookups gave different results even though
`starting_point.ancestors` is the same.
Do the lookup in the same order as `starting_point.ancestors` by
skipping classes and modules that have an origin iclass. The origin
iclass is in the super chain after the prepended modules.
Note that just like before this commit, the starting point of the
constant lookup is always the first item that we search, regardless of
the presence of any prepended modules.
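A sketch of the difference as the commit describes it (names and values are
illustrative):
```ruby
module M
  CONST = "from M"
end

class A
  CONST = "from A"
  prepend M
end

class B < A
end

B.ancestors # => [B, M, A, Object, Kernel, BasicObject]
B::CONST    # => "from M" with this commit, matching B.ancestors;
            #    previously "from A" was found first
A::CONST    # => "from A" either way: the starting point is searched first
```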
[Bug #17887]
Extracted repeated code into update_classvariable_cache. When the cvc
table is not set in getclassvariable, an empty table was created, but
it had no id and would cause a [BUG], so the code was made the same as
in setclassvariable.
Redo of 34a2acdac788602c14bf05fb616215187badd504 and
931138b00696419945dc03e10f033b1f53cd50f3 which were reverted.
GitHub PR #4340.
This change implements a cache for class variables. Previously there was
no cache for cvars. Cvar access is slow due to needing to travel all the
way up the ancestor tree before returning the cvar value. The deeper the
ancestor tree, the slower cvar access will be.
The benefits of the cache are more visible with a higher number of
included modules due to the way Ruby looks up class variables. The
benchmark here includes 26 modules and shows that, with the cache, this
branch is 6.5x faster when accessing class variables.
```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]
| |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar | 5.681M| 36.980M|
| | -| 6.51x|
```
Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails
application. ActiveRecord::Base.logger has 71 ancestors. The more
ancestors a tree has, the clearer the speed increase; i.e. if Base had
only one ancestor we'd see no improvement. This benchmark is run on a
vanilla Rails application.
Benchmark code:
```ruby
require "benchmark/ips"
require_relative "config/environment"
Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```
Ruby 3.0 master / Rails 6.1:
```
Warming up --------------------------------------
logger 155.251k i/100ms
Calculating -------------------------------------
```
Ruby 3.0 with cvar cache / Rails 6.1:
```
Warming up --------------------------------------
logger 1.546M i/100ms
Calculating -------------------------------------
logger 14.857M (± 4.8%) i/s - 74.198M in 5.006202s
```
Lastly we ran a benchmark to demonstrate the difference between master
and our cache when the number of modules increases. This benchmark
measures 1 ancestor, 30 ancestors, and 100 ancestors.
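The code for that benchmark isn't reproduced here; a rough sketch of the
30-module case (names illustrative) might look like:
```ruby
class Thirty
  30.times { include Module.new } # pad the ancestor chain with 30 modules
  @@cvar = :ok

  def self.read
    @@cvar # without the cache, each read walks the whole ancestor chain
  end
end

Thirty.read # => :ok
```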
Ruby 3.0 master:
```
Warming up --------------------------------------
1 module 1.231M i/100ms
30 modules 432.020k i/100ms
100 modules 145.399k i/100ms
Calculating -------------------------------------
1 module 12.210M (± 2.1%) i/s - 61.553M in 5.043400s
30 modules 4.354M (± 2.7%) i/s - 22.033M in 5.063839s
100 modules 1.434M (± 2.9%) i/s - 7.270M in 5.072531s
Comparison:
1 module: 12209958.3 i/s
30 modules: 4354217.8 i/s - 2.80x (± 0.00) slower
100 modules: 1434447.3 i/s - 8.51x (± 0.00) slower
```
Ruby 3.0 with cvar cache:
```
Warming up --------------------------------------
1 module 1.641M i/100ms
30 modules 1.655M i/100ms
100 modules 1.620M i/100ms
Calculating -------------------------------------
1 module 16.279M (± 3.8%) i/s - 82.038M in 5.046923s
30 modules 15.891M (± 3.9%) i/s - 79.459M in 5.007958s
100 modules 16.087M (± 3.6%) i/s - 81.005M in 5.041931s
Comparison:
1 module: 16279458.0 i/s
100 modules: 16087484.6 i/s - same-ish: difference falls within error
30 modules: 15891406.2 i/s - same-ish: difference falls within error
```
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>