Because a thread calling IO#close now blocks in a native condvar wait,
it's possible for there to be _no_ threads left to actually handle
incoming signals, unblocking-function (ubf) calls, and so on.
This manifested as failing tests on Solaris 10 (SPARC), because:
* One thread called IO#close, which sent a SIGVTALRM to the other
thread to interrupt it, and then waited on the condvar to be notified
that the reading thread was done.
* One thread was calling IO#read, but it hadn't yet reached the actual
call to select(2) when the SIGVTALRM arrived, so it never unblocked
itself.
This results in a deadlock.
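Reduced to a minimal Ruby sketch, the racing pattern looks roughly like
this (an illustration only, not the actual Solaris test; the `sleep` is
just a crude way to let the reader block first):

```ruby
r, w = IO.pipe

reader = Thread.new do
  r.read(1)      # blocks (eventually inside select(2)) waiting for data
rescue IOError
  :interrupted   # raised when the stream is closed in another thread
end

sleep 0.1        # give the reader a chance to block
r.close          # must interrupt the reader, then wait for it to finish
p reader.value   # => :interrupted
```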
The fix is to use a real Ruby mutex for the close lock; that way, the
closing thread goes into sigwait-sleep and can keep trying to interrupt
the select(2) thread.
See the discussion in: https://github.com/ruby/ruby/pull/7865/
* Add rb_io_path and rb_io_open_descriptor.
* Use rb_io_open_descriptor to create PTY objects
* Rename FMODE_PREP -> FMODE_EXTERNAL and expose it
I believe FMODE_PREP referred to the concept of a "pre-prepared" file, but
FMODE_EXTERNAL is clearer about what the file descriptor represents and
aligns with the language used in the IO::Buffer module.
* Ensure that rb_io_open_descriptor closes the FD if it fails
If FMODE_EXTERNAL is not set, Ruby is guaranteed to eventually take
responsibility for closing any file descriptor you pass to
rb_io_open_descriptor, even if that call raises an exception.
* Rename IS_EXTERNAL_FD -> RUBY_IO_EXTERNAL_P
* Expose `rb_io_closed_p`.
* Add `rb_io_mode` to get IO mode.
---------
Co-authored-by: KJ Tsanaktsidis <ktsanaktsidis@zendesk.com>
* Documentation consistency.
* Improve consistency of `pread`/`pwrite` implementation when given length (see the sketch after this list).
* Remove HAVE_PREAD / HAVE_PWRITE - these are no longer optional.
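For reference, a small Ruby sketch of the `pread`/`pwrite` semantics involved
(the file name is hypothetical; both calls take an explicit offset and leave
the file position untouched):

```ruby
File.binwrite("example.bin", "hello world")

File.open("example.bin", "r+b") do |f|
  p f.pread(5, 6)        # => "world"  (read 5 bytes at offset 6)
  f.pwrite("WORLD", 6)   # overwrite 5 bytes at offset 6
  p f.pos                # => 0        (the file position is unchanged)
end
```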
When one thread is closing a file descriptor whilst another thread is
concurrently reading it, we need to wait for the reading thread to be
done with it to prevent a potential EBADF (or, worse, file descriptor
reuse).
At the moment, that is done by keeping a list of threads still using the
file descriptor in io_close_fptr. It then continually calls
rb_thread_schedule() in fptr_finalize_flush until said list is empty.
That busy-looping seems to behave rather poorly on some OSes,
particularly FreeBSD. It can cause the TestIO#test_race_gets_and_close
test to fail (even with its very long 200-second timeout) because the
closing thread starves out the thread still using the descriptor.
To fix that, I introduce struct rb_io_close_wait_list: a list of
threads still using a file descriptor that we want to close. We call
`rb_notify_fd_close` to let the thread scheduler know we're closing an
FD, which fills the list with threads. Then we call
`rb_notify_fd_close_wait`, which blocks the closing thread until all of
the still-using threads are done.
This is implemented with a condition variable sleep, so no busy-looping
is required.
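A rough Ruby analogue of that wait (purely illustrative; the real
implementation is C code in thread.c and none of the names below are real):
the closing thread sleeps on a condition variable and is woken when the
count of still-using threads reaches zero, instead of spinning.

```ruby
mutex = Mutex.new
cond  = ConditionVariable.new
busy  = 2   # pretend two threads are still using the descriptor

users = Array.new(busy) do
  Thread.new do
    sleep rand(0.1)              # simulate in-flight I/O on the descriptor
    mutex.synchronize do
      busy -= 1
      cond.signal if busy.zero?  # wake the closer once the list is empty
    end
  end
end

# The "closer": sleep on the condition variable, no busy-looping required.
mutex.synchronize do
  cond.wait(mutex) until busy.zero?
end
users.each(&:join)
puts "descriptor is no longer in use; safe to close"
```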
This was already the behavior when a single `'external:internal'`
encoding specifier string was passed. This makes the behavior
consistent for the case where separate external and internal
encoding specifiers are provided.
While here, fix the IO#set_encoding method documentation to
state that either the first or second argument can be a string
with an encoding name, and describe the behavior when the
external encoding is binary.
Fixes [Bug #18899]
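A Ruby sketch of the now-consistent behavior, assuming (as Bug #18899
describes) that a binary external encoding causes the internal encoding
to be ignored:

```ruby
r, w = IO.pipe

# Single-string form: the internal encoding was already dropped.
r.set_encoding("binary:utf-8")
p r.internal_encoding                       # => nil

# Two-argument form: now consistent with the string form.
r.set_encoding(Encoding::BINARY, Encoding::UTF_8)
p r.external_encoding == Encoding::BINARY   # => true
p r.internal_encoding                       # => nil
```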
When `maxlen` is `nil`, `IO#read` honors the data mode of the stream, so
text mode applies newline translation; when a length is given, the bytes
are returned without translation. For example:
```ruby
File.binwrite("a.txt", "\r\n\r")
p File.open("a.txt", "rt").read # "\n\n"
p File.open("a.txt", "rt").read(3) # "\r\n\r"
```
Note that this newline translation is _not_ specific to Windows.
[Feature #6047]
Currently the buffer is grown by `BUFSIZ` (1024 bytes) on every iteration,
which is a bit wasteful. Instead, we can double the capacity whenever there
is less than `BUFSIZ` of capacity left.
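A toy model of the new growth policy (illustration only; the real logic is
C code in io.c and the numbers are made up):

```ruby
BUFSIZ   = 1024
capacity = BUFSIZ
used     = 0

# Double the capacity whenever less than BUFSIZ of headroom remains,
# so the number of reallocations grows logarithmically with total size.
until used >= 1_000_000
  used += 4096                                 # pretend we appended 4 KiB
  capacity *= 2 while capacity - used < BUFSIZ
end

p capacity   # a handful of doublings instead of ~1000 BUFSIZ-sized grows
```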
Moves Expect library doc into io.c.
Changes certain links that pointed to local sections so that they now point to sections in doc/io_streams.rdoc.
Removes local sections now superseded by sections in doc/io_streams.rdoc.