The (sole) use of memcpy in our public header is now replaced with a
direct call to ruby_nonempty_memcpy, and the previous definition of
memcpy is now internal-only. [Bug #18893]
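For context, the idea behind ruby_nonempty_memcpy is roughly the sketch
below (simplified, not the actual definition): calling memcpy with a null
pointer is undefined behaviour even when the length is zero, so the copy
is only forwarded when there is actually something to copy.

    #include <string.h>

    /* Simplified sketch of the idea behind ruby_nonempty_memcpy: avoid
     * handing memcpy a (possibly NULL) pointer when n == 0, which would
     * be undefined behaviour. */
    static inline void *
    nonempty_memcpy_sketch(void *dest, const void *src, size_t n)
    {
        return n ? memcpy(dest, src, n) : dest;
    }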
The former ROBJECT_IV_INDEX_TBL macro relied on RCLASS_IV_INDEX_TBL,
which is not exposed to extension libraries, so the macro was effectively
broken. Why not just deprecate it and convert the internal uses into an
inline function.
RARRAY_AREF has been a macro for historical reasons. We might not be able
to change that for the public API, but why not relax the situation
internally and make it an inline function.
According to the MSVC manual (*1), cl.exe can skip re-including a header
file when it:
- contains #pragma once, or
- starts with #ifndef, or
- starts with #if !defined.
GCC has a similar trick (*2), but it is stricter (e.g. there must be
_no tokens_ outside of the #ifndef...#endif).
Sun C lacked #pragma once for a very long time. Oracle Developer Studio
12.5 finally implemented it, but we cannot assume such a recent version.
This changeset modifies the header files so that each of them consists of
exactly one #ifndef...#endif block. I believe this is the most portable
way to trigger these compiler optimizations. [Bug #16770]
*1: https://docs.microsoft.com/en-us/cpp/preprocessor/once
*2: https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html
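For reference, the guard shape each header now follows is roughly the
sketch below (file and macro names illustrative); nothing but comments
appears outside the guard, which is what the GCC optimization requires:

    /* foo.h -- illustrative guard layout */
    #ifndef RUBY_FOO_H
    #define RUBY_FOO_H

    /* ... the entire contents of the header go here ... */

    #endif /* RUBY_FOO_H */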
Makes committers' daily life easier by no longer #include-ing everything
from internal.h; each file now includes only what it needs instead. This
should significantly speed up incremental builds.
We take the following inclusion order in this changeset:
1. "ruby/config.h", where _GNU_SOURCE is defined (must be the very
first thing among everything).
2. RUBY_EXTCONF_H if any.
3. Standard C headers, sorted alphabetically.
4. Other system headers, possibly guarded by #ifdef.
5. Everything else, sorted alphabetically.
Exceptions are the win32-related headers, which tend not to be
self-contained (they have inclusion-order dependencies). A sketch of the
resulting order follows.
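Purely for illustration (the header names are examples, not a
prescription), the top of a typical .c file now looks like this:

    #include "ruby/config.h"        /* 1. always first; defines _GNU_SOURCE */

    #ifdef RUBY_EXTCONF_H
    # include RUBY_EXTCONF_H        /* 2. extension-specific config, if any */
    #endif

    #include <stdlib.h>             /* 3. standard C headers, alphabetical */
    #include <string.h>

    #ifdef HAVE_SYS_MMAN_H
    # include <sys/mman.h>          /* 4. other system headers, #ifdef-guarded */
    #endif

    #include "internal/array.h"     /* 5. everything else, alphabetical */
    #include "internal/object.h"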
Now that we no longer support old compilers, we can safely delete several
obsolete #ifdef guards. Also, because (as of writing) it is impossible to
compile the program with a C++ compiler, let's just reject __cplusplus
entirely to reduce the number of LOCs.
Note however that we still cannot eliminate the __STDC_VERSION__ checks,
because MSVC does not define it, on the grounds that its C99 support is
only partial.
See also https://social.msdn.microsoft.com/Forums/vstudio/en-US/53a4fd75-9f97-48b2-aa63-2e2e5a15efa3
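The "prohibit __cplusplus" part boils down to a guard along these lines
(a hedged sketch; the actual wording of the message may differ):

    #ifdef __cplusplus
    # error This file is not intended to be compiled as C++.
    #endif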
This is tentative. For the sake of simplicity we partially revert
commits e9cb552ec9, ee85a6e72b and 51edb30042. Will decouple them
once again when we are ready.
One day, I could no longer stand the way it was written, and finally
started cleaning the code up. This changeset is the beginning of a series
of housekeeping commits. It is a simple refactoring: split internal.h into
separate files, so that we can divide and conquer in the upcoming commits.
No lines of code are added or removed, except the obvious file
headers/footers. The generated binary is identical to the one before.
Methods and their definitions can be allocated/deallocated on the fly.
One pathological situation is when a method is deallocated and another
one is allocated immediately afterwards. The addresses of those old/new
method entries/definitions can then be identical, depending on the
underlying malloc/free implementation.
So pointer comparison is insufficient. We have to check the contents.
To do so we introduce def->method_serial, which is an integer unique to
that specific method definition.
PS: Note that method_serial being uintptr_t rather than rb_serial_t is
intentional. This is because rb_serial_t can be wider than a pointer on a
32-bit system (rb_serial_t is at least 64 bits). In order to preserve the
old packing of struct rb_call_cache, rb_serial_t is inappropriate.
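A hedged sketch of the resulting check (struct and helper names here are
simplified stand-ins, not the actual definitions):

    #include <stdint.h>

    /* Simplified illustration: validate a cached definition by its serial,
     * not by its address, because a freed definition's address can be
     * recycled for a brand-new definition by malloc/free. */
    struct method_definition_sketch {
        uintptr_t method_serial;   /* unique per definition */
        /* ... */
    };

    static inline int
    cached_def_still_valid(uintptr_t cached_serial,
                           const struct method_definition_sketch *def)
    {
        return def && cached_serial == def->method_serial;
    }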
Previously every time a method was defined on a module, we would
recursively walk all subclasses to see if the module was included in a
class which the VM optimizes for (such as Integer#+).
For most method definitions we can tell immediately that this won't be
the case based on the method's name. To do this we keep a hash whose keys
are the method IDs of the optimized methods; if the newly defined method's
ID isn't in that table, we don't need to check subclasses at all.
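A minimal sketch of that fast path, assuming a set-like st_table of
optimized method IDs (all names here are hypothetical, not the actual
implementation):

    #include "ruby/ruby.h"
    #include "ruby/st.h"

    /* Hypothetical set of method IDs the VM specializes (e.g. '+', '[]').
     * Populated once at VM boot (not shown). */
    static st_table *vm_optimized_mids;

    static void
    method_added_hook(VALUE mod, ID mid)
    {
        (void)mod;
        /* Common case: the name is not one the VM optimizes, so there is
         * nothing to invalidate and the recursive subclass walk is skipped. */
        if (!st_lookup(vm_optimized_mids, (st_data_t)mid, NULL)) return;

        /* Rare case: fall back to walking subclasses and invalidating the
         * affected optimizations. */
    }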
This makes super inside instance_eval, in a method defined in a module,
behave the same as it does in a method defined in a class. The reason
this wasn't implemented before is that there is a check to determine
whether the self in the current context is of the expected class, and a
module itself can be included in multiple classes, so it doesn't have a
single expected class.
Implementing this requires giving iclasses knowledge of which
class created them, so that super call in the module method
knows the expected class for super calls. This reference
is called includer, and should only be set for iclasses.
Note that the approach Ruby uses in this check is not robust. If
you instance_eval another object of the same class and call super,
instead of a TypeError, you get super called with the
instance_eval receiver rather than the method receiver. Truly
fixing super would require keeping a reference to the super object
(method receiver) in each frame where scope has changed, and using
that instead of current self when calling super.
Fixes [Bug #11636]
After the previous commit, this was still broken. The reason it
was broken is that a refined module that hasn't been prepended to
yet keeps the refined methods in the module's method table. When
prepending, the module's method table is moved to the origin
iclass, and then the refined methods are moved from the method
table to a new method table in the module itself.
Unfortunately, that means that if a class has included the module,
prepending breaks the refinements, because when the methods are
moved from the origin iclass's method table to the module's method
table, they are removed from the method table of the iclass
created when the module was included earlier.
Fix this by always creating an origin class when including a
module that has any refinements, even if the refinements are
not currently used. I wasn't sure of the best way to do that.
The approach I chose was to use an object flag. The flag is
set on the module when Module#refine is called, and if the
flag is present when the module is included in another module
or class, an origin iclass is created for the module.
Fixes [Bug #13446]
Decades ago, among all the data that a class has, its method
table was no doubt the most frequently accessed data. Previous
data structures were based on that assumption.
Today that is no longer true. The most frequently accessed field has
moved to class_serial. That field is not always as wide as VALUE,
but when it is, let us swap m_tbl and class_serial.
Calculating -------------------------------------
                                    ours       trunk
Optcarrot Lan_Master.nes          47.363      46.630 fps

Comparison:
             Optcarrot Lan_Master.nes
                    ours:        47.4 fps
                   trunk:        46.6 fps - 1.02x slower
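For illustration only, the kind of layout swap described above looks
roughly like the following sketch (it assumes internal.h's rb_serial_t,
SIZEOF_SERIAL_T and SIZEOF_VALUE; the struct name and surrounding fields
are simplified):

    struct classext_sketch {
    #if SIZEOF_SERIAL_T == SIZEOF_VALUE
        rb_serial_t class_serial;        /* hottest field now comes first */
        struct rb_id_table *m_tbl;
    #else
        struct rb_id_table *m_tbl;       /* widths differ: keep the old order
                                          * so the packing stays unchanged */
        rb_serial_t class_serial;
    #endif
        /* ... other fields ... */
    };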
These functions are used only from within their compilation unit, so we
can make them static, for better binary size. This changeset reduces
the size of the generated ruby binary from 26,590,128 bytes to
26,584,472 bytes on my machine.
This removes the security features added by $SAFE = 1, and warns for access
or modification of $SAFE from Ruby-level, as well as warning when calling
all public C functions related to $SAFE.
This modifies some internal functions that took a safe level argument
to no longer take the argument.
rb_require_safe now warns; rb_require_string has been added as a
version that takes a VALUE and does not warn.
One public C function that still takes a safe level argument and that
this doesn't warn for is rb_eval_cmd. We may want to consider
adding an alternative method that does not take a safe level argument,
and warn for rb_eval_cmd.
Looking at the list of symbols inside of libruby-static.a, I found
hundreds of functions that are defined, but used from nowhere.
There can be reasons for each of them (e.g. some functions are
specific to some platform, some are useful when debugging, etc).
However, the functions deleted here seem to exist for no reason at all.
This changeset reduces the size of the ruby binary from 26,671,456
bytes to 26,592,864 bytes on my machine.
Prior to this changeset, the majority of inline cache misses resolved
to the same method entry once rb_callable_method_entry() finished its
method search. Let's not call that function in the first place in
such situations.
In doing so we extend struct rb_call_cache from 44 bytes (on a 64-bit
machine) to 64 bytes, and fill the gap with secondary class serial(s).
The call cache's class serials now behave as an LRU cache.
Calculating -------------------------------------
                              ours         2.7         2.6
vm2_poly_same_method        2.339M      1.744M      1.369M i/s - 6.000M times in 2.565086s 3.441329s 4.381386s

Comparison:
             vm2_poly_same_method
                    ours:   2339103.0 i/s
                     2.7:   1743512.3 i/s - 1.34x slower
                     2.6:   1369429.8 i/s - 1.71x slower
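A hedged sketch of the LRU idea described above, with simplified names
and a hypothetical number of secondary slots:

    #include <stdint.h>

    #define AUX_SERIALS 3   /* hypothetical count that fills the 44->64 byte gap */

    struct call_cache_sketch {
        uint64_t class_serial[1 + AUX_SERIALS];  /* [0] is the most recent hit */
        /* ... cached method entry, method state, etc. ... */
    };

    /* Check the most recent serial first, then the secondary ones; on a
     * secondary hit, promote it to the front so the array behaves like a
     * tiny LRU. A miss falls back to rb_callable_method_entry(). */
    static inline int
    cc_serial_hit(struct call_cache_sketch *cc, uint64_t serial)
    {
        for (int i = 0; i < 1 + AUX_SERIALS; i++) {
            if (cc->class_serial[i] != serial) continue;
            while (i > 0) {
                cc->class_serial[i] = cc->class_serial[i - 1];
                i--;
            }
            cc->class_serial[0] = serial;
            return 1;
        }
        return 0;
    }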
ISO/IEC 9899:1999 section 6.7.8 specifies the values of objects with
static storage duration that are not explicitly initialized. According
to that, these initializers can simply be omitted. Doing so improves
future compatibility with respect to addition/deletion of fields of
this struct.
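For example (illustrative only):

    #include <stddef.h>

    struct opts { int verbose; const char *name; double scale; };

    /* Explicit zero-fill: must be kept in sync whenever fields change. */
    static struct opts a = { 0, NULL, 0.0 };

    /* Equivalent under C99 6.7.8: objects with static storage duration are
     * implicitly zero-initialized (arithmetic types to 0, pointers to NULL),
     * so the initializer can simply be omitted. */
    static struct opts b;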
Noticed that rb_method_basic_definition_p is frequently called.
Its callers include vm_caller_setup_args_block(),
rb_hash_default_value(), rb_num_negative_int_p(), and a lot more.
It seems worth caching the method resolution part: the majority of
rb_method_basic_definition_p() usages pass fixed class and fixed
method id combinations.
Calculating -------------------------------------
                     ours       trunk
so_matrix           2.379       2.115 i/s - 1.000 times in 0.420409s 0.472879s

Comparison:
                 so_matrix
                    ours:    2.4 i/s
                   trunk:    2.1 i/s - 1.12x slower
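A hedged sketch of what such per-call-site caching can look like
(entirely hypothetical names; it merely illustrates keying the cached
answer on the class's serial):

    #include <stdint.h>

    struct basic_def_cache {
        uint64_t class_serial;   /* serial of the class last asked about */
        int      result;         /* cached answer for that (class, mid) pair */
    };

    static int
    basic_definition_p_cached(struct basic_def_cache *cache,
                              uint64_t current_serial,
                              int (*slow_path)(void))
    {
        if (cache->class_serial != current_serial) {
            cache->result = slow_path();           /* real method resolution */
            cache->class_serial = current_serial;  /* remember for next time */
        }
        return cache->result;
    }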
Previously this was restricted to only gcc because of the
GCC_VERSION_SINCE check (which explicitly excludes clang).
GCC 3.3.0 is quite old so I feel relatively safe assuming that all
reasonable versions of clang support this.
ko1 requested that the ability to call rb_raise from anywhere outside of
the GVL be dropped, as it is "too much". Give up that part, move the GVL
acquisition routine into gc.c, and turn it into our new gc_raise().