Previously we were passing the memory_id. This was already broken if
compaction was run (which changes the memory_id), and now that object_id
is a monotonically increasing number it was always broken.
This commit fixes this by deferring removal from the object_id table
until finalizers have run (for objects with finalizers) and also copying
the SEEN_OBJ_ID flag onto the zombie objects.
This changes object_id from being based on the object's location in
memory (or a nearby memory location in the case of a conflict) to be
based on an always increasing number.
This number is a Ruby Integer which allows it to overflow the size of a
pointer without issue (very unlikely to happen in real programs
especially on 64-bit, but a nice guarantee).
This changes obj_to_id_tbl and id_to_obj_tbl to both be maps of Ruby
objects to Ruby objects (previously they mapped Ruby objects to C integers),
which simplifies updating them after compaction as we can run them
through gc_update_table_refs.
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
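A minimal sketch of that shape, using only public-facing APIs (st_foreach, st_insert, rb_gc_location) rather than gc.c's actual gc_update_table_refs(): a VALUE-to-VALUE table can be refreshed after compaction by asking rb_gc_location() for each key's and value's new address. Rebuilding into a fresh table is just the simplest illustration, not how the real code does it.
```c
#include <ruby.h>

/* Copy each (key, value) pair into a fresh table, replacing both sides
 * with their post-compaction addresses. rb_gc_location() returns the new
 * address of a moved object, or the same address if it did not move. */
static int
copy_moved_pair(st_data_t key, st_data_t val, st_data_t arg)
{
    st_table *fresh = (st_table *)arg;

    st_insert(fresh,
              (st_data_t)rb_gc_location((VALUE)key),
              (st_data_t)rb_gc_location((VALUE)val));
    return ST_CONTINUE;
}

static st_table *
rebuild_after_compaction(st_table *tbl)
{
    st_table *fresh = st_init_numtable();

    st_foreach(tbl, copy_moved_pair, (st_data_t)fresh);
    st_free_table(tbl);
    return fresh;
}
```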
This commit attempts to fix this error:
http://ci.rvm.jp/results/trunk-gc-asserts@ruby-sky1/2353281
Each non-full heap_page struct contains a reference to the next page
that contains free slots. Compaction could fill any page, including
pages that happen to be linked to as "pages which contain free slots".
To fix this, we'll iterate each page, and rebuild the "free page list"
depending on the number of actual free slots on that page. If there are
no free slots on the page, we'll set the free_next pointer to NULL.
Finally we'll pop one page off the "free page list" and set it as the
"using page" for the next allocation.
When we compact the heap, various st tables are updated, particularly
the table that contains the object id map. Updating an st table can
cause a GC to occur, and we need to prevent any GC from happening while
moving or updating references.
This reverts commit 60a7f9f446.
We can't have Ruby objects pointing at T_ZOMBIE objects otherwise we get
an error in the GC. We need to find a different way to update
references.
When we run finalizers we have to copy all of the finalizers to a new
data structure because a finalizer could add another finalizer and we
need to keep draining the "real" finalizer table until it's empty.
We also don't want Ruby programs to mutate the finalizers that we're
iterating over.
Before this commit we would copy the finalizers into a linked list.
The problem with this approach is that if compaction happens, the linked
list will need to be updated. But the GC doesn't know about the
existence of the linked list, so it could not update references. This
commit changes the linked list to be a Ruby array so that when
compaction happens, the arrays will automatically be updated and all
references remain valid.
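A rough sketch of the draining loop, assuming hypothetical pop_one_finalizer_group()/run_finalizer_group() helpers in place of the real finalizer-table plumbing; the point is that the pending entries live in an ordinary Ruby array the GC can see and update during compaction:
```c
#include <ruby.h>

VALUE pop_one_finalizer_group(void);    /* hypothetical: remove one entry from
                                           the real table, or return Qnil */
void  run_finalizer_group(VALUE group); /* hypothetical: invoke the finalizers */

static void
run_all_finalizers(void)
{
    VALUE pending = rb_ary_new();
    VALUE group;

    /* A finalizer may register new finalizers, so keep draining the real
     * table until nothing is left. */
    while (!NIL_P(group = pop_one_finalizer_group())) {
        rb_ary_push(pending, group);
    }

    while (RARRAY_LEN(pending) > 0) {
        run_finalizer_group(rb_ary_pop(pending));

        /* Pick up anything registered while running the previous batch. */
        while (!NIL_P(group = pop_one_finalizer_group())) {
            rb_ary_push(pending, group);
        }
    }
}
```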
Simple comparison between proc/ifunc/method invocations:
```
proc 15.209M (± 1.6%) i/s - 76.138M in 5.007413s
ifunc 15.195M (± 1.7%) i/s - 76.257M in 5.020106s
method 9.836M (± 1.2%) i/s - 49.272M in 5.009984s
```
As `proc` and `ifunc` show no significant difference, the latter was
chosen for the arity check.
ko1 requested that the ability to call rb_raise from anywhere
outside of the GVL is "too much". Give up that part, move the GVL
acquisition routine into gc.c, and make it our new gc_raise().
Allocation routines like ALLOC_N() can now raise exceptions
on integer overflows. This is a problem when the calling thread
has no GVL. Memory allocation has been allowed without it, but
can still fail.
Let's just relax rb_raise's restriction so that we can call it
with or without the GVL. With the GVL the behaviour is unchanged.
With no GVL, it waits to acquire it first.
Also, integer overflows can theoretically occur during GC when
we expand the object space. There is not much we can do then. Call
rb_memerror and let that routine abort the process.
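A sketch of the idea, not the actual gc_raise(): whether the GVL is held is passed in as a flag here (the real code detects this itself), and the public rb_thread_call_with_gvl() stands in for the acquisition routine that moved into gc.c.
```c
#include <ruby.h>
#include <ruby/thread.h>

struct raise_args {
    VALUE exc;
    const char *msg;
};

static void *
raise_with_gvl(void *ptr)
{
    struct raise_args *args = ptr;
    rb_raise(args->exc, "%s", args->msg);
    return NULL; /* not reached: rb_raise does not return */
}

static void
raise_anywhere(int has_gvl, VALUE exc, const char *msg)
{
    struct raise_args args = { exc, msg };

    if (has_gvl) {
        /* Behaviour is unchanged when the GVL is already held. */
        rb_raise(exc, "%s", msg);
    }
    else {
        /* No GVL: wait for it, then raise. */
        rb_thread_call_with_gvl(raise_with_gvl, &args);
    }
}
```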
This changeset is to kill future possibility of bugs similar to
CVE-2019-11932. The vulnerability occurs when reallocarray(3)
(which is a variant of realloc(3) and roughly resembles our
ruby_xmalloc2()) returns NULL. In our C API, ruby_xmalloc()
never returns NULL; it raises NoMemoryError instead. ruby_xfree()
does not return NULL by definition. ruby_xrealloc(), on the other
hand, _did_ return NULL _and_ also raised sometimes. It is very
confusing. Let's not do that. x-series APIs shall raise on
error and shall not return NULL.
This changeset basically replaces `ruby_xmalloc(x * y)` with
`ruby_xmalloc2(x, y)`. Some convenience functions are also
provided, for instance `rb_xmalloc_mul_add(x, y, z)`, which allocates
x * y + z bytes.
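A sketch of the arithmetic guard such a helper performs (not the real implementation, which raises NoMemoryError rather than returning NULL):
```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Allocate x * y + z bytes, refusing to let either the multiplication or
 * the addition overflow size_t. */
static void *
checked_mul_add_alloc(size_t x, size_t y, size_t z)
{
    if (y != 0 && x > SIZE_MAX / y) return NULL; /* x * y would overflow */
    size_t total = x * y;

    if (total > SIZE_MAX - z) return NULL;       /* + z would overflow */
    total += z;

    return malloc(total);
}
```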
I'd like to call `gc_compact` after major GC, but before the GC
finishes. This means we can't allocate any objects inside `gc_compact`.
So in this commit I'm just pulling the compaction statistics allocation
outside the `gc_compact` function so we can safely call it.
This function has always been used wrongly from the start: "allocate a
buffer, then wrap it with tmpbuf". This order can cause a memory
leak, as tmpbuf creation can also raise a NoMemoryError exception.
The right order is "create a tmpbuf, then allocate & wrap a buffer".
So the argument of this function is more harmful than merely
useless.
TODO:
* Rename this function to a more proper name, as it is not used
for a "temporary" (function-local) purpose.
* Allocate and wrap at once safely, like `ALLOCV` (see the sketch
below).
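For reference, the `ALLOCV` pattern that second TODO item points at looks roughly like this: the buffer is allocated and tied to a protecting VALUE in a single step, so there is no window where it exists unwrapped. The function body here is arbitrary.
```c
#include <ruby.h>

static VALUE
sum_first_n(long n)
{
    VALUE holder;
    long *buf = ALLOCV_N(long, holder, n); /* allocated and wrapped at once */
    long sum = 0;

    for (long i = 0; i < n; i++) {
        buf[i] = i;
        sum += buf[i];
    }

    ALLOCV_END(holder); /* release the temporary buffer */
    return LONG2NUM(sum);
}
```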
This reverts commits: 10d6a3aca7 8ba48c1b85 fba8627dc1 dd883de5ba 6c6a25feca 167e6b48f1 7cb96d41a5 3207979278 595b3c4fdd 1521f7cf89 c11c5e69ac cf33608203 3632a812c0 f56506be0d 86427a3219 .
The reason for the revert is that we observe an ABA problem around
the inline method cache. When a cache misses, we search for a
method entry, and if the entry is identical to what was cached
before, we reuse the cache. But the commits we are reverting here
introduced situations where a method entry is freed, then the
identical memory region is used for another method entry. An
inline method cache cannot detect that ABA.
Here is code that reproduces such a situation:
```ruby
require 'prime'

class << Integer
  alias org_sqrt sqrt
  def sqrt(n)
    raise
  end

  GC.stress = true
  Prime.each(7*37){} rescue nil # <- Here we populate CC
  class << Object.new; end

  # This adjacent remove-then-alias maneuver
  # frees a method entry, then immediately
  # reuses it for another.
  remove_method :sqrt
  alias sqrt org_sqrt
end

Prime.each(7*37).to_a # <- SEGV
```
Now that we have eliminated most destructive operations over the
rb_method_entry_t / rb_callable_method_entry_t, let's make them
mostly immutable and mark them const.
One exception is rb_export_method(), which destructively modifies
visibilities of method entries. I have left that operation as is
because I suspect that destructiveness is the nature of that
function.
Most (if not all) of the fields of rb_method_definition_t are never
meant to be modified after they are stored. Marking them const
makes it possible for compilers to warn on unintended modifications.
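A generic illustration of the effect (the struct here is a stand-in, not the real rb_method_definition_t layout):
```c
/* Once a field is const-qualified, any later assignment is rejected at
 * compile time instead of silently mutating the definition. */
struct method_definition_like {
    const int type;          /* set once at creation, never changed */
    const void *const body;  /* neither the pointer nor the pointee may change */
};

static void
mutate(struct method_definition_like *def)
{
    /* Both of these now fail to compile:
     *   def->type = 1;     error: assignment of read-only member 'type'
     *   def->body = 0;     error: assignment of read-only member 'body'
     */
    (void)def;
}
```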
[Feature #16035]
This goes one step further than what nobu did in [Feature #13498].
With this patch, special objects such as static symbols, integers, etc. can be used as either keys or values inside a WeakMap. They simply don't have a finalizer defined on them.
This is useful if you need to deduplicate value objects.
After 5e86b005c0, I now think ANYARGS is
dangerous and should be extinct. This commit deletes ANYARGS from
st_foreach. I strongly believe that this commit should have come
with b0af0592fd, which added extra
parameter to st_foreach callbacks.
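For reference, a callback in the explicitly-typed shape st_foreach now expects, with the extra st_data_t argument; what the table holds and the counting are arbitrary here.
```c
#include <ruby.h>

/* Count entries while walking the table; arg carries the accumulator. */
static int
count_entries(st_data_t key, st_data_t value, st_data_t arg)
{
    size_t *counter = (size_t *)arg;

    (void)key;
    (void)value;
    (*counter)++;
    return ST_CONTINUE; /* or ST_STOP / ST_DELETE */
}

static size_t
table_size_by_walking(st_table *tbl)
{
    size_t counter = 0;

    st_foreach(tbl, count_entries, (st_data_t)&counter);
    return counter;
}
```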
After 5e86b005c0, I now think ANYARGS is
dangerous and should be extinct. This commit deletes ANYARGS from
rb_proc_new / rb_fiber_new, and applies RB_BLOCK_CALL_FUNC_ARGLIST
wherever necessary.
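For reference, a block function spelled with RB_BLOCK_CALL_FUNC_ARGLIST and handed to rb_proc_new; what the block computes (doubling its argument) is arbitrary.
```c
#include <ruby.h>

/* RB_BLOCK_CALL_FUNC_ARGLIST expands to the full, explicitly typed
 * parameter list (yielded arg, callback arg, argc, argv, blockarg). */
static VALUE
double_it(RB_BLOCK_CALL_FUNC_ARGLIST(yielded_arg, callback_arg))
{
    (void)callback_arg;
    (void)argc;
    (void)argv;
    (void)blockarg;
    return rb_funcall(yielded_arg, rb_intern("*"), 1, INT2FIX(2));
}

static VALUE
make_doubler_proc(void)
{
    return rb_proc_new(double_it, Qnil);
}
```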
After 5e86b005c0, I now think ANYARGS is
dangerous and should be extinct. This commit deletes ANYARGS from
rb_ensure, which also revealed many arity / type mismatches.
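For reference, rb_ensure now takes a pair of VALUE (*)(VALUE) callbacks, so a wrong arity or type no longer slips through silently; the IO handling below is only a stand-in body.
```c
#include <ruby.h>

static VALUE
read_body(VALUE io)
{
    return rb_funcall(io, rb_intern("read"), 0);
}

static VALUE
close_io(VALUE io)
{
    return rb_funcall(io, rb_intern("close"), 0);
}

static VALUE
read_and_close(VALUE io)
{
    /* close_io runs whether read_body returns normally or raises. */
    return rb_ensure(read_body, io, close_io, io);
}
```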
I'm afraid the keys to this hash are just integers, and those integers
may look like VALUE pointers when they are not. Since we don't mark the
keys to this hash, it's probably safe to say that none of them have
moved, so we shouldn't try to update the references either.
As `rb_objspace_each_objects_without_setup` doesn't reset and
restore the `dont_incremental` flag, the bare iterator is renamed to
`objspace_each_objects_without_setup`. `objspace_each_objects`
calls it directly when called with the flag disabled, and only
wraps the arguments otherwise.