#ifndef RUBY_VM_CALLINFO_H /*-*-C-*-vi:se ft=c:*/
#define RUBY_VM_CALLINFO_H
/**
* @author Ruby developers <ruby-core@ruby-lang.org>
* @copyright This file is a part of the programming language Ruby.
* Permission is hereby granted, to either redistribute and/or
* modify this file, provided that the conditions mentioned in the
* file COPYING are met. Consult the file for details.
*/
#include "debug_counter.h"
#include "internal/class.h"
#include "shape.h"
enum vm_call_flag_bits {
VM_CALL_ARGS_SPLAT_bit, // m(*args)
VM_CALL_ARGS_BLOCKARG_bit, // m(&block)
VM_CALL_FCALL_bit, // m(args) # receiver is self
VM_CALL_VCALL_bit, // m # method call that looks like a local variable
VM_CALL_ARGS_SIMPLE_bit, // !(ci->flag & (SPLAT|BLOCKARG)) && blockiseq == NULL && ci->kw_arg == NULL
VM_CALL_KWARG_bit, // has kwarg
VM_CALL_KW_SPLAT_bit, // m(**opts)
VM_CALL_TAILCALL_bit, // located at tail position
VM_CALL_SUPER_bit, // super
VM_CALL_ZSUPER_bit, // zsuper
VM_CALL_OPT_SEND_bit, // internal flag
VM_CALL_KW_SPLAT_MUT_bit, // kw splat hash can be modified (to avoid allocating a new one)
VM_CALL__END
};
#define VM_CALL_ARGS_SPLAT (0x01 << VM_CALL_ARGS_SPLAT_bit)
#define VM_CALL_ARGS_BLOCKARG (0x01 << VM_CALL_ARGS_BLOCKARG_bit)
#define VM_CALL_FCALL (0x01 << VM_CALL_FCALL_bit)
#define VM_CALL_VCALL (0x01 << VM_CALL_VCALL_bit)
#define VM_CALL_ARGS_SIMPLE (0x01 << VM_CALL_ARGS_SIMPLE_bit)
#define VM_CALL_KWARG (0x01 << VM_CALL_KWARG_bit)
#define VM_CALL_KW_SPLAT (0x01 << VM_CALL_KW_SPLAT_bit)
#define VM_CALL_TAILCALL (0x01 << VM_CALL_TAILCALL_bit)
#define VM_CALL_SUPER (0x01 << VM_CALL_SUPER_bit)
#define VM_CALL_ZSUPER (0x01 << VM_CALL_ZSUPER_bit)
#define VM_CALL_OPT_SEND (0x01 << VM_CALL_OPT_SEND_bit)
#define VM_CALL_KW_SPLAT_MUT (0x01 << VM_CALL_KW_SPLAT_MUT_bit)
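/* Rough mapping from Ruby call syntax to the flags above (a reading aid; the
 * exact combination emitted is decided by the compiler in compile.c):
 *
 *   foo(1, 2)         # VM_CALL_FCALL | VM_CALL_ARGS_SIMPLE
 *   foo               # VM_CALL_FCALL | VM_CALL_VCALL | VM_CALL_ARGS_SIMPLE
 *   obj.foo(*args)    # VM_CALL_ARGS_SPLAT
 *   obj.foo(k: 1)     # VM_CALL_KWARG
 *   obj.foo(**opts)   # VM_CALL_KW_SPLAT
 *   obj.foo(&blk)     # VM_CALL_ARGS_BLOCKARG
 *   super             # VM_CALL_SUPER | VM_CALL_ZSUPER | VM_CALL_FCALL
 */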
struct rb_callinfo_kwarg {
int keyword_len;
VALUE keywords[];
};
static inline size_t
rb_callinfo_kwarg_bytes(int keyword_len)
{
return rb_size_mul_add_or_raise(
keyword_len,
sizeof(VALUE),
sizeof(struct rb_callinfo_kwarg),
rb_eRuntimeError);
}
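/* Example (a sketch, not an actual call site): thanks to the flexible array
 * member, a kwarg list and its keyword symbols live in one allocation sized by
 * rb_callinfo_kwarg_bytes().  `ruby_xmalloc` stands in for whatever allocator
 * the real caller uses. */
#if 0
static struct rb_callinfo_kwarg *
example_kwarg_for_a_and_b(void)
{
    struct rb_callinfo_kwarg *kw = ruby_xmalloc(rb_callinfo_kwarg_bytes(2));
    kw->keyword_len = 2;
    kw->keywords[0] = ID2SYM(rb_intern("a"));
    kw->keywords[1] = ID2SYM(rb_intern("b"));
    return kw;
}
#endif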
// imemo_callinfo
struct rb_callinfo {
VALUE flags;
const struct rb_callinfo_kwarg *kwarg;
VALUE mid;
VALUE flag;
VALUE argc;
};
#ifndef USE_EMBED_CI
#define USE_EMBED_CI 1
#endif
#if SIZEOF_VALUE == 8
#define CI_EMBED_TAG_bits 1
#define CI_EMBED_ARGC_bits 15
#define CI_EMBED_FLAG_bits 16
#define CI_EMBED_ID_bits 32
#elif SIZEOF_VALUE == 4
#define CI_EMBED_TAG_bits 1
#define CI_EMBED_ARGC_bits 3
#define CI_EMBED_FLAG_bits 13
#define CI_EMBED_ID_bits 15
#endif
#if (CI_EMBED_TAG_bits + CI_EMBED_ARGC_bits + CI_EMBED_FLAG_bits + CI_EMBED_ID_bits) != (SIZEOF_VALUE * 8)
#error "CI_EMBED_TAG/ARGC/FLAG/ID bit widths must add up to SIZEOF_VALUE * 8"
#endif
#define CI_EMBED_FLAG 0x01
#define CI_EMBED_ARGC_SHFT (CI_EMBED_TAG_bits)
#define CI_EMBED_ARGC_MASK ((((VALUE)1)<<CI_EMBED_ARGC_bits) - 1)
#define CI_EMBED_FLAG_SHFT (CI_EMBED_TAG_bits + CI_EMBED_ARGC_bits)
#define CI_EMBED_FLAG_MASK ((((VALUE)1)<<CI_EMBED_FLAG_bits) - 1)
#define CI_EMBED_ID_SHFT (CI_EMBED_TAG_bits + CI_EMBED_ARGC_bits + CI_EMBED_FLAG_bits)
#define CI_EMBED_ID_MASK ((((VALUE)1)<<CI_EMBED_ID_bits) - 1)
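/* Resulting layout of a packed (pointer-embedded) callinfo on a 64-bit VALUE,
 * derived from the shift/mask definitions above:
 *
 *   bit  0        tag (RUBY_FIXNUM_FLAG): distinguishes a packed ci from a
 *                 real imemo pointer
 *   bits 1..15    argc
 *   bits 16..31   flag (VM_CALL_* bits)
 *   bits 32..63   mid (method ID)
 */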
static inline bool
vm_ci_packed_p(const struct rb_callinfo *ci)
{
#if USE_EMBED_CI
if (LIKELY(((VALUE)ci) & 0x01)) {
return 1;
}
else {
VM_ASSERT(IMEMO_TYPE_P(ci, imemo_callinfo));
return 0;
}
#else
return 0;
#endif
}
static inline bool
vm_ci_p(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci) || IMEMO_TYPE_P(ci, imemo_callinfo)) {
return 1;
}
else {
return 0;
}
}
static inline ID
vm_ci_mid(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci)) {
return (((VALUE)ci) >> CI_EMBED_ID_SHFT) & CI_EMBED_ID_MASK;
}
else {
return (ID)ci->mid;
}
}
static inline unsigned int
vm_ci_flag(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci)) {
return (unsigned int)((((VALUE)ci) >> CI_EMBED_FLAG_SHFT) & CI_EMBED_FLAG_MASK);
}
else {
return (unsigned int)ci->flag;
}
}
static inline unsigned int
vm_ci_argc(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci)) {
return (unsigned int)((((VALUE)ci) >> CI_EMBED_ARGC_SHFT) & CI_EMBED_ARGC_MASK);
}
else {
return (unsigned int)ci->argc;
}
}
static inline const struct rb_callinfo_kwarg *
vm_ci_kwarg(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci)) {
return NULL;
}
else {
return ci->kwarg;
}
}
static inline void
vm_ci_dump(const struct rb_callinfo *ci)
{
if (vm_ci_packed_p(ci)) {
ruby_debug_printf("packed_ci ID:%s flag:%x argc:%u\n",
rb_id2name(vm_ci_mid(ci)), vm_ci_flag(ci), vm_ci_argc(ci));
}
else {
rp(ci);
}
}
#define vm_ci_new(mid, flag, argc, kwarg) vm_ci_new_(mid, flag, argc, kwarg, __FILE__, __LINE__)
#define vm_ci_new_runtime(mid, flag, argc, kwarg) vm_ci_new_runtime_(mid, flag, argc, kwarg, __FILE__, __LINE__)
/* This is passed to STATIC_ASSERT. Cannot be an inline function. */
#define VM_CI_EMBEDDABLE_P(mid, flag, argc, kwarg) \
(((mid ) & ~CI_EMBED_ID_MASK) ? false : \
((flag) & ~CI_EMBED_FLAG_MASK) ? false : \
((argc) & ~CI_EMBED_ARGC_MASK) ? false : \
(kwarg) ? false : true)
#define vm_ci_new_id(mid, flag, argc, must_zero) \
((const struct rb_callinfo *) \
((((VALUE)(mid )) << CI_EMBED_ID_SHFT) | \
(((VALUE)(flag)) << CI_EMBED_FLAG_SHFT) | \
(((VALUE)(argc)) << CI_EMBED_ARGC_SHFT) | \
RUBY_FIXNUM_FLAG))
static inline const struct rb_callinfo *
vm_ci_new_(ID mid, unsigned int flag, unsigned int argc, const struct rb_callinfo_kwarg *kwarg, const char *file, int line)
{
#if USE_EMBED_CI
if (VM_CI_EMBEDDABLE_P(mid, flag, argc, kwarg)) {
RB_DEBUG_COUNTER_INC(ci_packed);
return vm_ci_new_id(mid, flag, argc, kwarg);
}
#endif
const bool debug = 0;
if (debug) ruby_debug_printf("%s:%d ", file, line);
// TODO: dedup
const struct rb_callinfo *ci = (const struct rb_callinfo *)
rb_imemo_new(imemo_callinfo,
(VALUE)mid,
(VALUE)flag,
(VALUE)argc,
(VALUE)kwarg);
if (debug) rp(ci);
if (kwarg) {
RB_DEBUG_COUNTER_INC(ci_kw);
}
else {
RB_DEBUG_COUNTER_INC(ci_nokw);
}
VM_ASSERT(vm_ci_flag(ci) == flag);
VM_ASSERT(vm_ci_argc(ci) == argc);
return ci;
}
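/* Example (a sketch): a plain call such as `recv.foo(1, 2)` fits the embedded
 * form on 64-bit builds, so no imemo object is allocated and the returned
 * "pointer" is really a tagged word. */
#if 0
const struct rb_callinfo *ci = vm_ci_new(rb_intern("foo"), VM_CALL_ARGS_SIMPLE, 2, NULL);
VM_ASSERT(vm_ci_packed_p(ci));
VM_ASSERT(vm_ci_mid(ci) == rb_intern("foo"));
VM_ASSERT(vm_ci_argc(ci) == 2);
#endif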
static inline const struct rb_callinfo *
vm_ci_new_runtime_(ID mid, unsigned int flag, unsigned int argc, const struct rb_callinfo_kwarg *kwarg, const char *file, int line)
{
RB_DEBUG_COUNTER_INC(ci_runtime);
return vm_ci_new_(mid, flag, argc, kwarg, file, line);
}
#define VM_CALLINFO_NOT_UNDER_GC IMEMO_FL_USER0
static inline bool
vm_ci_markable(const struct rb_callinfo *ci)
{
if (! ci) {
return false; /* or true? This is Qfalse... */
}
else if (vm_ci_packed_p(ci)) {
return true;
}
else {
VM_ASSERT(IMEMO_TYPE_P(ci, imemo_callinfo));
return ! FL_ANY_RAW((VALUE)ci, VM_CALLINFO_NOT_UNDER_GC);
}
}
#define VM_CI_ON_STACK(mid_, flags_, argc_, kwarg_) \
(struct rb_callinfo) { \
.flags = T_IMEMO | \
(imemo_callinfo << FL_USHIFT) | \
VM_CALLINFO_NOT_UNDER_GC, \
.mid = mid_, \
.flag = flags_, \
.argc = argc_, \
.kwarg = kwarg_, \
}
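/* Example (a sketch): some C paths need a throwaway callinfo without touching
 * the GC heap; VM_CALLINFO_NOT_UNDER_GC keeps vm_ci_markable() from treating
 * the stack object as markable. */
#if 0
struct rb_callinfo ci = VM_CI_ON_STACK(rb_intern("call"), VM_CALL_ARGS_SIMPLE, 1, NULL);
VM_ASSERT(!vm_ci_markable(&ci)); /* never handed to the GC */
#endif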
typedef VALUE (*vm_call_handler)(
struct rb_execution_context_struct *ec,
struct rb_control_frame_struct *cfp,
struct rb_calling_info *calling);
// imemo_callcache
struct rb_callcache {
const VALUE flags;
/* inline cache: key */
const VALUE klass; // Not marked on purpose: marking it would keep the class
                   // alive forever. When klass is collected, the cc is
                   // cleared (cc->klass = 0) at vm_ccs_free().
/* inline cache: values */
const struct rb_callable_method_entry_struct * const cme_;
const vm_call_handler call_;
union {
struct {
uintptr_t value; // Shape ID in upper bits, index in lower bits
} attr;
const enum method_missing_reason method_missing_reason; /* used by method_missing */
VALUE v;
const struct rb_builtin_function *bf;
} aux_;
};
#define VM_CALLCACHE_UNMARKABLE FL_FREEZE
#define VM_CALLCACHE_ON_STACK FL_EXIVAR
extern const struct rb_callcache *rb_vm_empty_cc(void);
extern const struct rb_callcache *rb_vm_empty_cc_for_super(void);
#define vm_cc_empty() rb_vm_empty_cc()
static inline void vm_cc_attr_index_set(const struct rb_callcache *cc, attr_index_t index, shape_id_t dest_shape_id);
static inline void
vm_cc_attr_index_initialize(const struct rb_callcache *cc, shape_id_t shape_id)
{
vm_cc_attr_index_set(cc, (attr_index_t)-1, shape_id);
}
static inline const struct rb_callcache *
vm_cc_new(VALUE klass,
const struct rb_callable_method_entry_struct *cme,
vm_call_handler call)
{
const struct rb_callcache *cc = (const struct rb_callcache *)rb_imemo_new(imemo_callcache, (VALUE)cme, (VALUE)call, 0, klass);
vm_cc_attr_index_initialize(cc, INVALID_SHAPE_ID);
RB_DEBUG_COUNTER_INC(cc_new);
return cc;
}
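/* Example (a sketch): after method lookup resolves `cme` for `klass`, a cache
 * entry is created with the chosen dispatch function (it can be swapped later
 * via vm_cc_call_set()).  `klass`, `cme` and `handler` are assumed to be in
 * scope; `handler` stands for whichever vm_call_* function dispatch selects
 * and is not a name defined in this header. */
#if 0
const struct rb_callcache *cc = vm_cc_new(klass, cme, handler);
VM_ASSERT(vm_cc_class_check(cc, klass));
VM_ASSERT(vm_cc_cme(cc) == cme);
#endif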
#define VM_CC_ON_STACK(clazz, call, aux, cme) \
(struct rb_callcache) { \
.flags = T_IMEMO | \
(imemo_callcache << FL_USHIFT) | \
VM_CALLCACHE_UNMARKABLE | \
VM_CALLCACHE_ON_STACK, \
.klass = clazz, \
.cme_ = cme, \
.call_ = call, \
.aux_ = aux, \
}
static inline bool
vm_cc_class_check(const struct rb_callcache *cc, VALUE klass)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc->klass == 0 ||
RB_TYPE_P(cc->klass, T_CLASS) || RB_TYPE_P(cc->klass, T_ICLASS));
return cc->klass == klass;
}
static inline int
vm_cc_markable(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
return FL_TEST_RAW((VALUE)cc, VM_CALLCACHE_UNMARKABLE) == 0;
}
static inline const struct rb_callable_method_entry_struct *
vm_cc_cme(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc->call_ == NULL || // not initialized yet
!vm_cc_markable(cc) ||
cc->cme_ != NULL);
return cc->cme_;
}
static inline vm_call_handler
vm_cc_call(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc->call_ != NULL);
return cc->call_;
}
static inline attr_index_t
vm_cc_attr_index(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
return (attr_index_t)((cc->aux_.attr.value & SHAPE_FLAG_MASK) - 1);
}
static inline shape_id_t
vm_cc_attr_index_dest_shape_id(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
return cc->aux_.attr.value >> SHAPE_FLAG_SHIFT;
}
static inline void
vm_cc_atomic_shape_and_index(const struct rb_callcache *cc, shape_id_t * shape_id, attr_index_t * index)
{
uintptr_t cache_value = cc->aux_.attr.value; // Atomically read 64 bits
*shape_id = (shape_id_t)(cache_value >> SHAPE_FLAG_SHIFT);
*index = (attr_index_t)(cache_value & SHAPE_FLAG_MASK) - 1;
return;
}
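/* The readers above rely on the encoding used by vm_cc_attr_index_set() and
 * vm_ic_attr_index_set() below: a single word holds the destination shape ID
 * in the upper bits and (attr_index + 1) in the low bits, so 0 in the low
 * bits means "no index cached" and shape + index can be fetched in one load. */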
static inline void
vm_ic_atomic_shape_and_index(const struct iseq_inline_iv_cache_entry *ic, shape_id_t * shape_id, attr_index_t * index)
{
uintptr_t cache_value = ic->value; // Atomically read 64 bits
*shape_id = (shape_id_t)(cache_value >> SHAPE_FLAG_SHIFT);
*index = (attr_index_t)(cache_value & SHAPE_FLAG_MASK) - 1;
return;
}
static inline shape_id_t
vm_ic_attr_index_dest_shape_id(const struct iseq_inline_iv_cache_entry *ic)
{
return (shape_id_t)(ic->value >> SHAPE_FLAG_SHIFT);
}
static inline unsigned int
vm_cc_cmethod_missing_reason(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
return cc->aux_.method_missing_reason;
}
static inline bool
vm_cc_invalidated_p(const struct rb_callcache *cc)
{
if (cc->klass && !METHOD_ENTRY_INVALIDATED(vm_cc_cme(cc))) {
return false;
}
else {
return true;
}
}
// For RJIT: `cc_cme` is expected to be the caller's already-loaded result of `vm_cc_cme(cc)`.
static inline bool
vm_cc_valid_p(const struct rb_callcache *cc, const rb_callable_method_entry_t *cc_cme, VALUE klass)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
if (cc->klass == klass && !METHOD_ENTRY_INVALIDATED(cc_cme)) {
return 1;
}
else {
return 0;
}
}
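/* Example (a sketch of how the checks above are used by a call site): consult
 * the cache first, fall back to a full method lookup on a miss.  `cc`, `recv`,
 * `ec`, `cfp` and `calling` are assumed to be in scope. */
#if 0
VALUE val;
if (vm_cc_class_check(cc, CLASS_OF(recv)) && !vm_cc_invalidated_p(cc)) {
    val = vm_cc_call(cc)(ec, cfp, calling); /* cache hit: use the cached handler */
}
else {
    /* cache miss: re-resolve the method and refill the cache */
}
#endif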
/* callcache: mutate */
#define VM_CALLCACH_IVAR IMEMO_FL_USER0
#define VM_CALLCACH_BF IMEMO_FL_USER1
static inline void
vm_cc_call_set(const struct rb_callcache *cc, vm_call_handler call)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc != vm_cc_empty());
*(vm_call_handler *)&cc->call_ = call;
}
static inline void
vm_cc_attr_index_set(const struct rb_callcache *cc, attr_index_t index, shape_id_t dest_shape_id)
{
uintptr_t *attr_value = (uintptr_t *)&cc->aux_.attr.value;
if (!vm_cc_markable(cc)) {
*attr_value = (uintptr_t)INVALID_SHAPE_ID << SHAPE_FLAG_SHIFT;
return;
}
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc != vm_cc_empty());
*attr_value = (attr_index_t)(index + 1) | ((uintptr_t)(dest_shape_id) << SHAPE_FLAG_SHIFT);
*(VALUE *)&cc->flags |= VM_CALLCACH_IVAR;
}
static inline bool
vm_cc_ivar_p(const struct rb_callcache *cc)
{
return (cc->flags & VM_CALLCACH_IVAR) != 0;
}
static inline void
vm_ic_attr_index_set(const rb_iseq_t *iseq, const struct iseq_inline_iv_cache_entry *ic, attr_index_t index, shape_id_t dest_shape_id)
{
*(uintptr_t *)&ic->value = ((uintptr_t)dest_shape_id << SHAPE_FLAG_SHIFT) | (attr_index_t)(index + 1);
}
static inline void
vm_ic_attr_index_initialize(const struct iseq_inline_iv_cache_entry *ic, shape_id_t shape_id)
{
*(uintptr_t *)&ic->value = (uintptr_t)shape_id << SHAPE_FLAG_SHIFT;
}
static inline void
vm_cc_method_missing_reason_set(const struct rb_callcache *cc, enum method_missing_reason reason)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc != vm_cc_empty());
*(enum method_missing_reason *)&cc->aux_.method_missing_reason = reason;
}
static inline void
vm_cc_bf_set(const struct rb_callcache *cc, const struct rb_builtin_function *bf)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc != vm_cc_empty());
*(const struct rb_builtin_function **)&cc->aux_.bf = bf;
*(VALUE *)&cc->flags |= VM_CALLCACH_BF;
}
static inline bool
vm_cc_bf_p(const struct rb_callcache *cc)
{
return (cc->flags & VM_CALLCACH_BF) != 0;
}
static inline void
vm_cc_invalidate(const struct rb_callcache *cc)
{
VM_ASSERT(IMEMO_TYPE_P(cc, imemo_callcache));
VM_ASSERT(cc != vm_cc_empty());
VM_ASSERT(cc->klass != 0); // must not have been invalidated already
*(VALUE *)&cc->klass = 0;
RB_DEBUG_COUNTER_INC(cc_ent_invalidate);
}
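/* Note that invalidation only clears `klass`; the imemo itself stays allocated
 * and readers notice the stale entry via vm_cc_invalidated_p() on their next
 * use. */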
/* calldata */
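/* One rb_call_data per call site: `ci` is the compile-time call shape (method
 * id, argc, flags, kwargs) and `cc` is the runtime cache filled in on first
 * dispatch. */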
struct rb_call_data {
const struct rb_callinfo *ci;
const struct rb_callcache *cc;
};
struct rb_class_cc_entries {
#if VM_CHECK_MODE > 0
VALUE debug_sig;
#endif
int capa;
int len;
const struct rb_callable_method_entry_struct *cme;
struct rb_class_cc_entries_entry {
const struct rb_callinfo *ci;
const struct rb_callcache *cc;
} *entries;
};
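/* A rb_class_cc_entries ("ccs") is stored in a class's cc_tbl and groups, for
 * one callable method entry, the (ci, cc) pairs created for call sites that
 * reached that method through the class; it is released with
 * rb_vm_ccs_free() (declared below). */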
#if VM_CHECK_MODE > 0
const rb_callable_method_entry_t *rb_vm_lookup_overloaded_cme(const rb_callable_method_entry_t *cme);
void rb_vm_dump_overloaded_cme_table(void);
static inline bool
vm_ccs_p(const struct rb_class_cc_entries *ccs)
{
return ccs->debug_sig == ~(VALUE)ccs;
}
static inline bool
vm_cc_check_cme(const struct rb_callcache *cc, const rb_callable_method_entry_t *cme)
{
if (vm_cc_cme(cc) == cme ||
(cme->def->iseq_overload && vm_cc_cme(cc) == rb_vm_lookup_overloaded_cme(cme))) {
return true;
}
else {
#if 1
// debug print
fprintf(stderr, "iseq_overload:%d\n", (int)cme->def->iseq_overload);
rp(cme);
rp(vm_cc_cme(cc));
rb_vm_lookup_overloaded_cme(cme);
#endif
return false;
}
}
#endif
// gc.c
void rb_vm_ccs_free(struct rb_class_cc_entries *ccs);
#endif /* RUBY_VM_CALLINFO_H */