Commit graph

16 commits

Author SHA1 Message Date
Koichi Sasada d578684989 `rb_thread_lock_native_thread()`
Introduce `rb_thread_lock_native_thread()` to allocate a dedicated
native thread to the current Ruby thread for M:N threads.
This C API is similar to Go's `runtime.LockOSThread()`.

Accepted at https://github.com/ruby/dev-meeting-log/blob/master/2023/DevMeeting-2023-08-24.md
(but it was left unimplemented in Ruby 3.3.0)
2024-02-21 15:38:29 +09:00
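
A minimal sketch of how a C extension might use this API, assuming it is declared in ruby/thread_native.h as `bool rb_thread_lock_native_thread(void)` and returns false when pinning is not possible (the `pin_to_native_thread` method name is hypothetical):
```
#include <ruby.h>
#include <ruby/thread_native.h>   /* assumed home of the declaration */

/* Pins the *calling* Ruby thread to its own dedicated native thread,
 * much like Go's runtime.LockOSThread(). Useful for C libraries that
 * rely on thread-local state under M:N scheduling. */
static VALUE
pin_to_native_thread(VALUE self)
{
    if (!rb_thread_lock_native_thread()) {
        rb_warn("could not pin the current Ruby thread to a native thread");
    }
    return Qnil;
}

void
Init_native_pin(void)
{
    rb_define_singleton_method(rb_cThread, "pin_to_native_thread",
                               pin_to_native_thread, 0);
}
```
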
Kazuhiro NISHIYAMA c54622c657
Fix a warning with USE_RUBY_DEBUG_LOG=1 on macOS
```
compiling ../thread.c
In file included from ../thread.c:263:
In file included from ../thread_pthread.c:2870:
../thread_pthread_mn.c:777:43: warning: format specifies type 'unsigned long' but the argument has type 'rb_hrtime_t' (aka 'unsigned long long') [-Wformat]
		RUBY_DEBUG_LOG("abs:%lu", abs);
				    ~~~   ^~~
				    %llu
../vm_debug.h:110:74: note: expanded from macro 'RUBY_DEBUG_LOG'
	ruby_debug_log(__FILE__, __LINE__, RUBY_FUNCTION_NAME_STRING, "" __VA_ARGS__); \
									 ^~~~~~~~~~~
1 warning generated.
```
2024-02-14 10:40:26 +09:00
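
For reference, a format mismatch like this is usually silenced either with an exact-width specifier macro or by casting the argument; a standalone illustration (not necessarily the exact change made here):
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for rb_hrtime_t, which the warning reports as a 64-bit
 * unsigned type ("unsigned long long" on macOS). */
typedef uint64_t hrtime_sketch_t;

int main(void)
{
    hrtime_sketch_t abs_ts = 123456789;

    /* Option 1: exact-width macro, correct on every platform. */
    printf("abs:%" PRIu64 "\n", abs_ts);

    /* Option 2: cast the argument to match the written specifier. */
    printf("abs:%llu\n", (unsigned long long)abs_ts);
    return 0;
}
```
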
KJ Tsanaktsidis 719db18b50 notify ASAN about M:N threading stack switches
In a similar way to how we do it with fibers in cont.c, we need to call
__sanitizer_start_switch_fiber and __sanitizer_finish_switch_fiber around
the call to coroutine_transfer to let ASAN save & restore the fake stack
pointer.

When an M:N thread is exiting, we pass `to_dead` to the new
coroutine_transfer0 function so that we can pass NULL for saving the
stack pointer. This signals to ASAN that the fake stack can be freed
(otherwise it would be leaked).

[Bug #20220]
2024-02-06 22:23:42 +11:00
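
The general shape of these annotations around a stack switch, shown as a minimal sketch with hypothetical coroutine types rather than the real coroutine_transfer0 code:
```
#include <stddef.h>

#if defined(__has_feature)
# if __has_feature(address_sanitizer)
#  include <sanitizer/common_interface_defs.h>
#  define SKETCH_ASAN 1
# endif
#endif

/* Hypothetical coroutine context; the real code uses struct coroutine_context. */
struct coro {
    void *stack_base;
    size_t stack_size;
    void *asan_fake_stack;   /* fake-stack handle saved across switches */
};

/* Switch from `from` to `to`. When `to_dead` is nonzero, `from` never runs
 * again, so NULL is passed and ASAN may free its fake stack instead of
 * leaking it. `raw_switch` stands in for the register/stack switch itself. */
static void
coro_transfer_sketch(struct coro *from, struct coro *to, int to_dead,
                     void (*raw_switch)(struct coro *, struct coro *))
{
#ifdef SKETCH_ASAN
    __sanitizer_start_switch_fiber(to_dead ? NULL : &from->asan_fake_stack,
                                   to->stack_base, to->stack_size);
#endif
    raw_switch(from, to);
#ifdef SKETCH_ASAN
    /* Execution is back on `from`'s stack: restore its fake stack. */
    __sanitizer_finish_switch_fiber(from->asan_fake_stack, NULL, NULL);
#endif
}
```
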
Nobuyoshi Nakada c30b8ae947
Adjust styles and indents [ci skip] 2024-01-08 00:50:41 +09:00
Koichi Sasada 82015496b9 MN: access `timer_th.waiting` with lock
`timer_th.waiting` should be protected by `timer_th.waiting_lock`
2023-12-24 10:57:27 +09:00
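
As a generic illustration of that locking rule (hypothetical types; the real fields live in thread_pthread.c), every access to the waiting list goes through the corresponding lock:
```
#include <pthread.h>
#include <stddef.h>

struct waiting_entry {
    struct waiting_entry *next;
};

/* Hypothetical stand-in for the timer thread state. */
struct timer_thread_sketch {
    pthread_mutex_t waiting_lock;
    struct waiting_entry *waiting;   /* list head guarded by waiting_lock */
};

static void
register_waiting(struct timer_thread_sketch *timer_th, struct waiting_entry *e)
{
    pthread_mutex_lock(&timer_th->waiting_lock);
    /* All reads and writes of timer_th->waiting happen under the lock. */
    e->next = timer_th->waiting;
    timer_th->waiting = e;
    pthread_mutex_unlock(&timer_th->waiting_lock);
}
```
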
JP Camara a2ebf9cc63 Replicate EEXIST epoll_ctl behavior in kqueue
* In the epoll implementation, you get an EEXIST if you try to register the same event for the same fd more than once for a particular epoll instance (sketched below)

* Otherwise kevent will just override the previous event registration, and if multiple threads listen on the same fd, only the last one to register will ever finish; the others are stuck

* This approach will lead to native threads getting created, similar to the epoll implementation. This is not ideal, but it fixes certain test cases for now, like test/socket/test_tcp.rb#test_accept_multithread
2023-12-24 07:07:11 +09:00
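
The epoll behavior being replicated, shown in isolation (a Linux-only sketch unrelated to the Ruby sources):
```
#include <errno.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    int epfd = epoll_create1(EPOLL_CLOEXEC);
    int fds[2];
    if (epfd < 0 || pipe(fds) != 0) return 1;

    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = fds[0] } };

    /* First registration of the fd succeeds. */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev) != 0) perror("first ADD");

    /* A second ADD for the same fd fails with EEXIST rather than silently
     * replacing the registration; kevent(2) would just overwrite it. */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev) != 0 && errno == EEXIST) {
        printf("second ADD failed with EEXIST, as expected\n");
    }

    close(fds[0]); close(fds[1]); close(epfd);
    return 0;
}
```
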
Koichi Sasada 4927f25148 skip `MAP_STACK` on FreeBSD 2023-12-20 17:00:55 +09:00
JP Camara 256f34ab6b Hand thread into `thread_sched_wait_events_timeval`
* When we have the thread already, it saves a lookup

* `event_wait`, not `kq`

Clean up the `thread_sched_wait_events_timeval` calls

* By handling the PTHREAD check inside the function, all the other code becomes much simpler and can just call the function directly without additional checks (see the sketch below)
2023-12-20 16:23:38 +09:00
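
A generic sketch of that cleanup pattern, with hypothetical names and guard macros rather than the actual Ruby functions: the platform check lives inside the helper, so callers invoke it unconditionally.
```
#include <stdbool.h>

struct thread_sketch { int id; };

/* The feature check is internal to the helper; callers need no #ifdef
 * and simply pass the thread they already hold. Returns true when the
 * wait was handled by the (hypothetical) M:N scheduler. */
static bool
sched_wait_events_sketch(struct thread_sketch *th, int fd, long timeout_usec)
{
#if defined(SKETCH_HAVE_PTHREAD) && defined(SKETCH_USE_MN_THREADS)
    /* ... register fd/timeout with the scheduler using `th` directly ... */
    return true;
#else
    (void)th; (void)fd; (void)timeout_usec;
    return false;   /* caller falls back to a plain blocking wait */
#endif
}
```
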
JP Camara 8782e02138 KQueue support for M:N threads
* Allows macOS users to use M:N threads (and technically FreeBSD, though it has not been verified on FreeBSD)

* Add a header check for sys/event.h and its macros, and include sys/event.h when present

* Rename epoll_fd to more generic kq_fd (Kernel event Queue) for use by both epoll and kqueue

* MAP_STACK is not available on macOS, so conditionally apply it to the mmap flags

* Set fd to close on exec

* Log debug messages specific to kqueue and epoll on creation

* close_invalidate raises an error for the kqueue fd on child process fork. It's unclear right now whether that's a bug or kqueue-specific behavior

Use kq with rb_thread_wait_for_single_fd

* Only platforms with `USE_POLL` (Linux) had changes applied to take advantage of kernel event queues. The same treatment was needed for the `select` path so that kqueue could be used properly

* Clean up kqueue specific code and make sure only flags that were actually set are removed (or an error is raised)

* Also handle kevent specific errnos, since most don't apply from epoll to kqueue

* Use the more platform-standard close-on-exec approach of `fcntl` and `FD_CLOEXEC` (sketched below). The io-event gem uses `ioctl`, but fcntl seems to be the recommended choice. It is also what Go, Bun, and Libuv use

* We're making changes in this file anyway, so we may as well fix a couple of spelling mistakes while here

Make sure FD_CLOEXEC carries over in dup

* Otherwise the kqueue descriptor should have FD_CLOEXEC, but doesn't and fails in assert_close_on_exec
2023-12-20 16:23:38 +09:00
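
The fcntl-based close-on-exec approach mentioned above, in isolation (a minimal BSD/macOS sketch, not the actual change):
```
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/event.h>   /* kqueue(); BSD/macOS only */
#include <unistd.h>

int main(void)
{
    int kq_fd = kqueue();
    if (kq_fd < 0) { perror("kqueue"); return 1; }

    /* Mark the kernel event queue close-on-exec via fcntl + FD_CLOEXEC,
     * the portable idiom also used by Go, Bun, and libuv. */
    int flags = fcntl(kq_fd, F_GETFD);
    if (flags < 0 || fcntl(kq_fd, F_SETFD, flags | FD_CLOEXEC) < 0) {
        perror("fcntl");
        close(kq_fd);
        return 1;
    }

    printf("kqueue fd %d is now close-on-exec\n", kq_fd);
    close(kq_fd);
    return 0;
}
```
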
John Hawthorn b2ad4fec1a Add missing GVL hooks for M:N threads and ractors 2023-12-09 09:31:41 -08:00
John Hawthorn 85bc80a51b Revert "Add missing GVL hooks for M:N threads and ractors"
This reverts commit ad54fbf281.
2023-12-03 18:37:06 -08:00
John Hawthorn ad54fbf281 Add missing GVL hooks for M:N threads and ractors
[Bug #20019]

This fixes GVL instrumentation in three locations where it was missing:
- Suspending when blocking on a Ractor
- Suspending when doing a coroutine transfer from an M:N thread
- Resuming after an M:N thread starts

Co-authored-by: Matthew Draper <matthew@trebex.net>
2023-12-02 10:06:07 -08:00
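
These are the hooks exposed through Ruby's GVL instrumentation API; a hedged sketch of a C extension observing suspend/resume events (hook and event names as in ruby/thread.h, counter handling simplified):
```
#include <ruby.h>
#include <ruby/thread.h>
#include <ruby/atomic.h>

static rb_atomic_t suspended_count;
static rb_atomic_t resumed_count;

/* Called by the VM on GVL events; only async-signal-safe work belongs here. */
static void
gvl_event_cb(rb_event_flag_t event,
             const rb_internal_thread_event_data_t *event_data, void *user_data)
{
    (void)event_data; (void)user_data;
    if (event == RUBY_INTERNAL_THREAD_EVENT_SUSPENDED) RUBY_ATOMIC_INC(suspended_count);
    if (event == RUBY_INTERNAL_THREAD_EVENT_RESUMED)   RUBY_ATOMIC_INC(resumed_count);
}

void
Init_gvl_probe(void)
{
    rb_internal_thread_add_event_hook(
        gvl_event_cb,
        RUBY_INTERNAL_THREAD_EVENT_SUSPENDED | RUBY_INTERNAL_THREAD_EVENT_RESUMED,
        NULL);
}
```
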
Nobuyoshi Nakada fbd55120f3
Cast up before multiplication 2023-10-29 21:27:49 +09:00
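
The pattern behind a change like this, as a generic illustration (not the code the commit touched): widen one operand before multiplying so the arithmetic itself happens at 64 bits.
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t pages = 70000, page_size = 65536;

    /* The product exceeds 2^32, so a 32-bit multiply wraps before the
     * result is widened by the assignment. */
    uint64_t wrong = pages * page_size;

    /* Casting up first makes the multiplication itself 64-bit. */
    uint64_t right = (uint64_t)pages * page_size;

    printf("wrong=%" PRIu64 " right=%" PRIu64 "\n", wrong, right);
    return 0;
}
```
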
Koichi Sasada d8a74207e7 use `uint32_t` instead of `__uint32_t` 2023-10-13 15:59:09 +09:00
Koichi Sasada 10ba3fc302 Use `sysconf()` to get PAGE_SIZE
Some systems do not use a 4096-byte page size (64KB, for example).
2023-10-13 01:09:41 +09:00
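
A minimal illustration of the portable query (standard POSIX, independent of the Ruby sources):
```
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask the OS for its page size instead of assuming 4096 bytes;
     * some systems (e.g. 64KB-page configurations) differ. */
    long page_size = sysconf(_SC_PAGESIZE);
    if (page_size < 0) {
        perror("sysconf");
        return 1;
    }
    printf("page size: %ld bytes\n", page_size);
    return 0;
}
```
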
Koichi Sasada be1bbd5b7d M:N thread scheduler for Ractors
This patch introduces an M:N thread scheduler for the Ractor system.

In general, an M:N thread scheduler employs N native threads (OS threads)
to manage M user-level threads (Ruby threads in this case).
In the Ruby interpreter, 1 native thread is provided for 1 Ractor,
and all of that Ractor's Ruby threads are managed by that native thread.

Since Ruby 1.9, the interpreter has used a 1:1 thread scheduler, which means
1 Ruby thread has 1 native thread. The M:N scheduler changes this strategy.

Because of compatibility issues (and stability issues in the implementation),
the main Ractor doesn't use the M:N scheduler by default. In other words,
threads on the main Ractor will be managed with the 1:1 thread scheduler.

There are additional settings via environment variables:

`RUBY_MN_THREADS=1` enables the M:N thread scheduler on the main Ractor.
Note that non-main Ractors use the M:N scheduler even without this
configuration. With this configuration, single-Ractor applications
run threads on an M:1 thread scheduler (green threads, user-level threads).

`RUBY_MAX_CPU=n` specifies the maximum number of native threads for the
M:N scheduler (default: 8).

This patch will be reverted soon if hard-to-fix issues are found.

[Bug #19842]
2023-10-12 14:47:01 +09:00
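
As a rough illustration of how such environment-variable settings are typically consumed at startup (hypothetical helpers, not the interpreter's actual parsing code):
```
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical: is the M:N scheduler requested for the main Ractor? */
static int
mn_threads_enabled(void)
{
    const char *s = getenv("RUBY_MN_THREADS");
    return s != NULL && s[0] == '1';
}

/* Hypothetical: maximum native threads for the M:N scheduler, default 8. */
static int
max_cpu(void)
{
    const char *s = getenv("RUBY_MAX_CPU");
    if (s == NULL || *s == '\0') return 8;
    int n = atoi(s);
    return n > 0 ? n : 8;
}

int main(void)
{
    printf("M:N threads on the main Ractor: %s\n", mn_threads_enabled() ? "yes" : "no");
    printf("max native threads: %d\n", max_cpu());
    return 0;
}
```
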