According to the MSVC manual (*1), cl.exe can skip including a header
file when it:
- contains #pragma once, or
- starts with #ifndef, or
- starts with #if ! defined.
GCC has a similar trick (*2), but it is stricter (e.g. there must be
_no tokens_ outside of the #ifndef...#endif).
Sun C lacked #pragma once for a long time. Oracle Developer Studio
12.5 finally implemented it, but we cannot assume such a recent version.
This changeset modifies header files so that each of them includes
exactly one #ifndef...#endif pair. I believe this is the most portable way
to trigger compiler optimizations. [Bug #16770]
*1: https://docs.microsoft.com/en-us/cpp/preprocessor/once
*2: https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html
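Concretely, each header now takes the one shape that all three compilers
recognize (a minimal sketch; the macro name is illustrative):

```c
/* foo.h -- nothing but comments may appear outside the guard, or
 * GCC's multiple-include optimization will not fire. */
#ifndef RUBY_FOO_H
#define RUBY_FOO_H

/* ... declarations go here ... */

#endif /* RUBY_FOO_H */
```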
Saves committers' daily lives by no longer #include-ing everything from
internal.h; each file now includes only what it needs instead. This
should significantly speed up incremental builds.
We take the following inclusion order in this changeset (see the
sketch after the list):
1. "ruby/config.h", where _GNU_SOURCE is defined (must be the very
first thing among everything).
2. RUBY_EXTCONF_H if any.
3. Standard C headers, sorted alphabetically.
4. Other system headers, possibly guarded by #ifdef.
5. Everything else, sorted alphabetically.
The exceptions are the win32-related headers, which tend not to be
self-contained (they have inclusion order dependencies).
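For instance, the top of a typical .c file would look like this under
the new rule (a sketch; the header names are illustrative):

```c
#include "ruby/config.h"     /* (1) must come first; defines _GNU_SOURCE */

#ifdef RUBY_EXTCONF_H
# include RUBY_EXTCONF_H     /* (2) extension-specific configuration */
#endif

#include <stddef.h>          /* (3) standard C headers, alphabetical */
#include <stdlib.h>

#ifdef HAVE_SYS_MMAN_H
# include <sys/mman.h>       /* (4) other system headers, guarded */
#endif

#include "internal/gc.h"     /* (5) everything else, alphabetical */
#include "ruby/ruby.h"
```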
Now that we no longer support old compilers, we can safely delete
several obsolete #ifdef guards. Also, because (as of writing) it is
impossible to compile the program with a C++ compiler, let's just
prohibit __cplusplus entirely to reduce the number of lines of code.
Note however that we still cannot eliminate __STDC_VERSION__ checks,
because MSVC does not define it, saying its C99 support is partial.
See also https://social.msdn.microsoft.com/Forums/vstudio/en-US/53a4fd75-9f97-48b2-aa63-2e2e5a15efa3
This is tentative. For the sake of simplicity we partially revert
commits e9cb552ec9, ee85a6e72b and 51edb30042. We will decouple them
again when we are ready.
One day, I could not resist the way it was written any longer, and
finally started to make the code clean. This changeset is the beginning
of a series of housekeeping commits. It is a simple refactoring: split
internal.h into files, so that we can divide and conquer in the
upcoming commits. No lines of code are added or removed, except the
obvious file headers/footers. The generated binary is identical to the
one before.
Methods and their definitions can be allocated and deallocated on the
fly. One pathological situation is when a method is deallocated and
another one is allocated immediately after that. The addresses of the
old and new method entries/definitions can then be identical, depending
on the underlying malloc/free implementation.
So pointer comparison is insufficient. We have to check the contents.
To do so we introduce def->method_serial, which is an integer unique to
that specific method definition.
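A hedged sketch of the idea (the struct layout is greatly simplified;
only the serial matters here):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch: every definition carries a serial that is never reused, so
 * two definitions that happen to share an address still differ. */
struct method_definition_sketch {
    uintptr_t method_serial;  /* unique per definition */
    /* ... actual definition data ... */
};

static uintptr_t last_serial;

static struct method_definition_sketch *
alloc_definition(void)
{
    struct method_definition_sketch *def = malloc(sizeof(*def));
    if (def) def->method_serial = ++last_serial;  /* never recycled */
    return def;
}

/* A cache hit now requires the serial, not just the pointer, to match. */
```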
PS: Note that method_serial being uintptr_t rather than rb_serial_t is
intentional. This is because rb_serial_t can be wider than a pointer
on a 32-bit system (rb_serial_t is at least 64 bits wide). In order to
preserve the old packing of struct rb_call_cache, rb_serial_t is
inappropriate.
Previously every time a method was defined on a module, we would
recursively walk all subclasses to see if the module was included in a
class which the VM optimizes for (such as Integer#+).
For most method definitions we can tell immediately that this won't be
the case based on the method's name. To do this we just keep a hash with
method IDs of optimized methods and if our new method isn't in that list
we don't need to check subclasses at all.
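A minimal sketch of that early return, assuming the set of optimized
method IDs lives in an st_table (the table and function names here are
hypothetical):

```c
#include "ruby/ruby.h"
#include "ruby/st.h"

static st_table *optimized_mids;  /* hypothetical: IDs the VM optimizes */

static void
check_method_redefinition(VALUE klass, ID mid)
{
    /* Fast path: if the name cannot possibly be an optimized call
     * such as Integer#+, there is nothing to invalidate. */
    if (!st_lookup(optimized_mids, (st_data_t)mid, NULL)) {
        return;  /* skip walking subclasses entirely */
    }
    /* Slow path: recursively inspect subclasses as before. */
}
```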
This makes the behavior the same as super in instance_eval in a method
defined in a class. The reason this wasn't implemented before is that
there is a check to determine if the self in the current context
is of the expected class, and a module itself can be included
in multiple classes, so it doesn't have an expected class.
Implementing this requires giving iclasses knowledge of which
class created them, so that super call in the module method
knows the expected class for super calls. This reference
is called includer, and should only be set for iclasses.
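A rough sketch of the added relationship (the struct shape is
illustrative, not the real RClass layout):

```c
#include "ruby/ruby.h"

/* When class C includes module M, the VM inserts an iclass wrapping M
 * into C's ancestry.  The iclass now also remembers C: */
struct iclass_sketch {
    VALUE module;    /* the module this iclass represents */
    VALUE includer;  /* the class whose include created this iclass */
};

/* super called from a method defined in M can then use includer as
 * the expected class when checking the current self. */
```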
Note that the approach Ruby uses in this check is not robust. If
you instance_eval another object of the same class and call super,
instead of a TypeError, you get super called with the
instance_eval receiver instead of the method receiver. Truly
fixing super would require keeping a reference to the super object
(method receiver) in each frame where scope has changed, and using
that instead of current self when calling super.
Fixes [Bug #11636]
After the previous commit, this was still broken. The reason it
was broken is that a refined module that hasn't been prepended to
yet keeps the refined methods in the module's method table. When
prepending, the module's method table is moved to the origin
iclass, and then the refined methods are moved from the method
table to a new method table in the module itself.
Unfortunately, that means that if a class has included the module,
prepending breaks the refinements, because when the methods are
moved from the origin iclass method table to the module method
table, they are removed from the method table of the iclass
created when the module was included earlier.
Fix this by always creating an origin class when including a
module that has any refinements, even if the refinements are
not currently used. I wasn't sure of the best way to do that.
The approach I chose was to use an object flag. The flag is
set on the module when Module#refine is called, and if the
flag is present when the module is included in another module
or class, an origin iclass is created for the module.
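A hedged sketch of the mechanism (the flag bit and helper names are
hypothetical):

```c
#include "ruby/ruby.h"

#define RMODULE_HAS_REFINEMENTS FL_USER3  /* hypothetical flag bit */

static void ensure_origin(VALUE module);  /* hypothetical helper */

/* Module#refine sets the flag on the module; inclusion checks it: */
static void
include_module_sketch(VALUE klass, VALUE module)
{
    if (FL_TEST(module, RMODULE_HAS_REFINEMENTS)) {
        ensure_origin(module);  /* eagerly create the origin iclass */
    }
    /* ... proceed with the normal include into klass ... */
}
```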
Fixes [Bug #13446]
Decades ago, among all the data that a class has, its method
table was no doubt the most frequently accessed. The previous
data structures were based on that assumption.
Today that is no longer true. The most frequently accessed field has
moved to class_serial. That field is not always as wide as VALUE,
but when it is, let us swap m_tbl and class_serial.
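The layout change, sketched (field names simplified and the #if
condition illustrative; rb_serial_t and rb_id_table come from the
internal headers):

```c
struct classext_sketch {
#if SIZEOF_SERIAL_T == SIZEOF_VOIDP      /* illustrative condition */
    rb_serial_t class_serial;            /* hottest field goes first */
    struct rb_id_table *m_tbl;
#else
    struct rb_id_table *m_tbl;           /* otherwise keep the old order */
    rb_serial_t class_serial;
#endif
    /* ... remaining fields unchanged ... */
};
```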
Calculating -------------------------------------
ours trunk
Optcarrot Lan_Master.nes 47.363 46.630 fps
Comparison:
Optcarrot Lan_Master.nes
ours: 47.4 fps
trunk: 46.6 fps - 1.02x slower
These functions are only used from within their compilation unit, so we
can make them static, for better binary size. This changeset reduces
the size of the generated ruby binary from 26,590,128 bytes to
26,584,472 bytes on my machine.
This removes the security features added by $SAFE = 1, and warns for access
or modification of $SAFE from Ruby-level, as well as warning when calling
all public C functions related to $SAFE.
This modifies some internal functions that took a safe level argument
to no longer take the argument.
rb_require_safe now warns; rb_require_string has been added as a
version that takes a VALUE and does not warn.
One public C function that still takes a safe level argument and that
this doesn't warn for is rb_eval_cmd. We may want to consider
adding an alternative method that does not take a safe level argument,
and warn for rb_eval_cmd.
Looking at the list of symbols inside of libruby-static.a, I found
hundreds of functions that are defined, but used from nowhere.
There can be reasons for each of them (e.g. some functions are
specific to some platform, some are useful when debugging, etc).
However it seems the functions deleted here exist for no reason.
This changeset reduces the size of the ruby binary from 26,671,456
bytes to 26,592,864 bytes on my machine.
Prior to this changeset, the majority of inline cache mishits resulted
in the same method entry when rb_callable_method_entry() resolved
the method search. Let's not call that function in the first place
in such situations.
In doing so we extend struct rb_call_cache from 44 bytes (on a
64-bit machine) to 64 bytes, and fill the gap with secondary class
serial(s). The call cache's class serials now behave as an LRU cache.
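A hedged sketch of the widened structure (sizes assume a 64-bit
machine; field names simplified, types from the internal headers):

```c
/* 64 bytes: one typical cache line.  class_serial became an array;
 * index 0 holds the most recently seen class and older entries are
 * shifted down, i.e. an LRU ordering. */
struct call_cache_sketch {
    rb_serial_t method_state;
    rb_serial_t class_serial[3];               /* newest hit kept first */
    const rb_callable_method_entry_t *me;
    /* ... call handler etc. ... */
};
```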
Calculating -------------------------------------
ours 2.7 2.6
vm2_poly_same_method 2.339M 1.744M 1.369M i/s - 6.000M times in 2.565086s 3.441329s 4.381386s
Comparison:
vm2_poly_same_method
ours: 2339103.0 i/s
2.7: 1743512.3 i/s - 1.34x slower
2.6: 1369429.8 i/s - 1.71x slower
ISO/IEC 9899:1999 section 6.7.8 specifies the values of objects with
static storage duration that are not explicitly initialized. According
to that, these initializers can be omitted. Doing so improves future
compatibility with additions and deletions of the fields of this
struct.
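For example, these two declarations are equivalent (the struct itself
is just an illustration):

```c
#include <stddef.h>

struct opts { int verbose; const char *name; void *data; };

/* Explicit zero initializers ... */
static struct opts a = { 0, NULL, NULL };

/* ... can simply be dropped: 6.7.8 guarantees that a static object
 * without an explicit initializer is zero-initialized.  b is identical
 * to a, and keeps working if fields are added to or removed from the
 * struct. */
static struct opts b;
```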
Noticed that rb_method_basic_definition_p is frequently called.
Its callers include vm_caller_setup_args_block(),
rb_hash_default_value(), rb_num_negative_int_p(), and a lot more.
It seems worth caching the method resolution part. The majority of
rb_method_basic_definition_p() usages take fixed class and fixed
method id combinations.
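A hedged sketch for one fixed (class, mid) combination: re-resolve only
when the class hierarchy changes, as tracked by its serial. The caching
shell itself is illustrative; RCLASS_SERIAL and
rb_method_basic_definition_p are the existing names.

```c
static inline int
basic_p_cached(VALUE klass, ID mid)
{
    static rb_serial_t cached_serial;  /* 0: not resolved yet */
    static int cached_result;

    if (RCLASS_SERIAL(klass) != cached_serial) {
        cached_result = rb_method_basic_definition_p(klass, mid);
        cached_serial = RCLASS_SERIAL(klass);
    }
    return cached_result;
}
```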
Calculating -------------------------------------
ours trunk
so_matrix 2.379 2.115 i/s - 1.000 times in 0.420409s 0.472879s
Comparison:
so_matrix
ours: 2.4 i/s
trunk: 2.1 i/s - 1.12x slower
Previously this was restricted to only gcc because of the
GCC_VERSION_SINCE check (which explicitly excludes clang).
GCC 3.3.0 is quite old so I feel relatively safe assuming that all
reasonable versions of clang support this.
ko1 requested that the ability to call rb_raise from anywhere outside
of the GVL was "too much". Give up that part, move the GVL
acquisition routine into gc.c, and make it our new gc_raise().
This changeset basically replaces `ruby_xmalloc(x * y)` with
`ruby_xmalloc2(x, y)`. Some convenience functions are also
provided, for instance `rb_xmalloc_mul_add(x, y, z)`, which allocates
x * y + z bytes.
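The point is integer overflow in the multiplication; a minimal sketch:

```c
#include "ruby/ruby.h"

void
alloc_demo(size_t x, size_t y, size_t z)
{
    /* Before: if x * y overflows, the multiplication silently wraps
     * and we under-allocate; later writes run off the end. */
    void *p = ruby_xmalloc(x * y);

    /* After: the overflow is detected and raises instead. */
    void *q = ruby_xmalloc2(x, y);

    /* New helper for the x * y + z pattern. */
    void *r = rb_xmalloc_mul_add(x, y, z);

    (void)p; (void)q; (void)r;
}
```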
This function has always been used wrongly: "allocate a
buffer, then wrap it with a tmpbuf". This order can cause a memory
leak, because tmpbuf creation itself can raise a NoMemoryError
exception. The right order is "create a tmpbuf, then allocate & wrap
the buffer". So the argument of this function is harmful rather than
just useless.
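The two orders, sketched; wrap_with_tmpbuf and attach_to_tmpbuf are
hypothetical names standing in for the real imemo tmpbuf constructors:

```c
#include "ruby/ruby.h"

/* Leaky order (before): if wrapping raises NoMemoryError, buf leaks. */
static void
before(size_t size)
{
    void *buf = ruby_xmalloc(size);
    VALUE tmpbuf = wrap_with_tmpbuf(buf);  /* may raise: buf is lost */
    (void)tmpbuf;
}

/* Safe order (after): once the tmpbuf exists, the buffer attached to
 * it is released together with it, even on exception. */
static void
after(size_t size)
{
    VALUE tmpbuf = wrap_with_tmpbuf(NULL);
    attach_to_tmpbuf(tmpbuf, ruby_xmalloc(size));
}
```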
TODO:
* Rename this function to a more proper name, as it is not used for a
"temporary" (function-local) purpose.
* Allocate and wrap at once safely, like `ALLOCV`.
The parser needs to determine whether a local variable is defined in
the outer scope or not. For that purpose, the "base_block" field has
kept the outer block.
However, the whole block was actually unneeded; the parser used only
base_block->iseq.
So, this change lets parser_params have the iseq directly, instead of
the whole block.
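Roughly (field names illustrative):

```c
/* Before: the parser kept the whole outer block ... */
struct parser_params_sketch_before {
    const struct rb_block *base_block;  /* only ->iseq was ever read */
};

/* After: keep just the part it actually uses. */
struct parser_params_sketch_after {
    const rb_iseq_t *parent_iseq;       /* illustrative field name */
};
```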
This reverts commits 10d6a3aca7, 8ba48c1b85, fba8627dc1, dd883de5ba,
6c6a25feca, 167e6b48f1, 7cb96d41a5, 3207979278, 595b3c4fdd, 1521f7cf89,
c11c5e69ac, cf33608203, 3632a812c0, f56506be0d and 86427a3219.
The reason for the revert is that we observe an ABA problem around the
inline method cache. When a cache mishits, we search for a
method entry, and if the entry is identical to what was cached
before, we reuse the cache. But the commits we are reverting here
introduced situations where a method entry is freed, then the
identical memory region is used for another method entry. An
inline method cache cannot detect that ABA.
Here is code that reproduces such a situation:
```ruby
require 'prime'
class << Integer
alias org_sqrt sqrt
def sqrt(n)
raise
end
GC.stress = true
Prime.each(7*37){} rescue nil # <- Here we populate CC
class << Object.new; end
# These adjacent remove-then-alias maneuver
# frees a method entry, then immediately
# reuses it for another.
remove_method :sqrt
alias sqrt org_sqrt
end
Prime.each(7*37).to_a # <- SEGV
```
At last, not only I but also your compiler can be fully confident
that the method entries pointed to from call caches are immutable. We
don't have to worry about silent updates. Just delete the branch
that is now always false.
Calculating -------------------------------------
ours trunk
vm2_poly_same_method 2.142M 2.070M i/s - 6.000M times in 2.801148s 2.898994s
Comparison:
vm2_poly_same_method
ours: 2141979.2 i/s
trunk: 2069683.8 i/s - 1.03x slower
This approach uses a flag bit on the final hash object in the regular splat,
as opposed to a previous approach that used a VM frame flag. The hash flag
approach is less invasive, and handles some cases that the VM frame flag
approach does not, such as saving the argument splat array and splatting it
later:
ruby2_keywords def foo(*args)
@args = args
bar
end
def bar
baz(*@args)
end
def baz(*args, **kw)
[args, kw]
end
foo(a:1) #=> [[], {a: 1}]
foo({a: 1}, **{}) #=> [[{a: 1}], {}]
foo({a: 1}) #=> 2.7: [[], {a: 1}] # and warning
foo({a: 1}) #=> 3.0: [[{a: 1}], {}]
It doesn't handle some cases that the VM frame flag handles, such as when
the final hash object is replaced using Hash#merge, but those cases are
probably less common and are unlikely to properly support keyword
argument separation.
Use ruby2_keywords to handle argument delegation in the delegate library.
Cfuncs that use rb_scan_args with the ":" entry suffer keyword
argument separation issues similar to those Ruby methods suffer, if
the cfuncs accept optional or variable arguments.
This makes the following changes to ":" handling.
* Treat the ":" entry as **kw, prompting keyword argument separation
warnings if called with a positional hash.
* Do not look for an option hash if empty keywords are provided.
For backwards compatibility, treat an empty keyword splat as an empty
mandatory positional hash argument, but emit a warning, as this
behavior will be removed in Ruby 3. The argument number check
needs to be moved lower so it can correctly handle an empty
positional argument being added.
* If the last argument is nil and it is necessary to treat it as an option
hash in order to make sure all arguments are processed, continue to
treat the last argument as the option hash. Emit a warning in this case,
as this behavior will be removed in Ruby 3.
* If splitting the keyword hash into two hashes, issue a warning, as we
will not be splitting hashes in Ruby 3.
* If the keyword argument is required to fill a mandatory positional
argument, continue to do so, but emit a warning as this behavior will
be going away in Ruby 3.
* If keyword arguments are provided and the last argument is not a hash,
that indicates something wrong. This can happen if a cfunc is calling
rb_scan_args multiple times, and providing arguments that were not
passed to it from Ruby. Callers need to switch to the new
rb_scan_args_kw function, which allows passing of whether keywords
were provided.
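For illustration, a cfunc taking one mandatory argument plus keywords
would look like this (a sketch of usage; rb_scan_args_kw and its
kw_splat parameter are what this changeset introduces):

```c
static VALUE
my_cfunc(int argc, VALUE *argv, VALUE self)
{
    VALUE arg, opts;

    /* ":" now behaves like **kw and participates in the separation
     * warnings described above. */
    rb_scan_args(argc, argv, "1:", &arg, &opts);
    return opts;
}

/* Callers that already know whether keywords were given (e.g. when
 * the arguments do not come straight from Ruby) say so explicitly: */
static VALUE
my_cfunc_kw(int argc, VALUE *argv, VALUE self, int kw_splat)
{
    VALUE arg, opts;

    rb_scan_args_kw(kw_splat, argc, argv, "1:", &arg, &opts);
    return opts;
}
```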
This commit fixes all warnings caused by the changes above.
It switches some function calls to *_kw versions with appropriate
kw_splat flags. If delegating arguments, RB_PASS_CALLED_KEYWORDS
is used. If creating new arguments, RB_PASS_KEYWORDS is used if
the last argument is a hash to be treated as keywords.
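A minimal example of the delegation case (sketched usage of the 2.7
*_kw API):

```c
#include "ruby/ruby.h"

/* Delegating a call: forward the caller's own keyword-splat state. */
static VALUE
delegate_call(VALUE obj, ID mid, int argc, const VALUE *argv)
{
    return rb_funcallv_kw(obj, mid, argc, argv, RB_PASS_CALLED_KEYWORDS);
}
```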
In open_key_args in io.c, use rb_scan_args_kw.
In this case, the arguments provided come from another C
function, not Ruby. The last argument may or may not be a hash,
so we can't set keyword argument mode. However, if it is a
hash, we don't want to warn when treating it as keywords.
In Ruby files, make sure to appropriately use keyword splats
or literal keywords when calling Cfuncs that now issue keyword
argument separation warnings through rb_scan_args. Also, make
sure not to pass nil in place of an option hash.
Work around Kernel#warn warnings due to problems in the Rubygems
override of the method. There is an open pull request to fix
these issues in Rubygems, but part of the Rubygems tests for
their override fail on ruby-head due to rb_scan_args not
recognizing empty keyword splats, which this commit fixes.
Implementation-wise, adding rb_scan_args_kw is kind of a pain,
because rb_scan_args takes a variable number of arguments.
In order to not duplicate all the code, the function internals need
to be split into two functions taking a va_list, and to avoid passing
in a ton of arguments, a single struct argument is used to handle
the variables previously local to the function.
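The resulting shape, greatly simplified (names hypothetical; the real
core also threads the kw_splat state through):

```c
#include <stdarg.h>
#include "ruby/ruby.h"

/* One struct replaces the pile of locals the old function body used. */
struct scan_state {
    int argc;
    const VALUE *argv;
    /* ... former locals of rb_scan_args ... */
};

static int scan_core(struct scan_state *st, const char *fmt, va_list ap);

int
rb_scan_args_sketch(int argc, const VALUE *argv, const char *fmt, ...)
{
    struct scan_state st = { argc, argv };
    va_list ap;
    int r;

    va_start(ap, fmt);
    r = scan_core(&st, fmt, ap);  /* the shared core takes a va_list */
    va_end(ap);
    return r;
}

/* rb_scan_args_kw_sketch(kw_splat, ...) collects its varargs the same
 * way and calls the very same scan_core(). */
```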