Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring holds indices into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time; this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling; for IRQ driven
IO, the application can just check the CQ ring without
entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
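
As a rough userspace illustration of the flow described above (and of what the
sample application linked above does), the sketch below sets up a small ring,
maps the SQ/CQ rings and the SQE array, and then submits and waits with a
single io_uring_enter(2) call. It assumes the uapi header and syscall numbers
are available; error handling and SQE filling are omitted, and the helper name
is made up.

#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

static int ring_setup_and_enter_example(void)
{
	struct io_uring_params p;
	void *sq_ring, *cq_ring, *sqes;
	int fd, ret;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, 8, &p);	/* 8 SQ entries */
	if (fd < 0)
		return -1;

	/* map the three shared regions: SQ ring, CQ ring, SQE array */
	sq_ring = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
		       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		       fd, IORING_OFF_SQ_RING);
	cq_ring = mmap(NULL, p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe),
		       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		       fd, IORING_OFF_CQ_RING);
	sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		    fd, IORING_OFF_SQES);
	(void)sq_ring; (void)cq_ring; (void)sqes;	/* SQEs would be filled here */

	/* submit whatever is queued and wait for at least one completion */
	ret = syscall(__NR_io_uring_enter, fd, p.sq_entries, 1,
		      IORING_ENTER_GETEVENTS, NULL, 0);
	close(fd);
	return ret;
}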

// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side.
 *
 * After the application reads the CQ ring tail, it must use an
 * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
 * before writing the tail (using smp_load_acquire to read the tail will
 * do). It also needs a smp_mb() before updating CQ head (ordering the
 * entry load(s) with the head store), pairing with an implicit barrier
 * through a control-dependency in io_get_cqe (smp_store_release to
 * store head will do). Failure to do so could lead to reading invalid
 * CQ entries.
 *
 * Likewise, the application must use an appropriate smp_wmb() before
 * writing the SQ tail (ordering SQ entry stores with the tail store),
 * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
 * to store the tail will do). And it needs a barrier ordering the SQ
 * head load before writing new SQ entries (smp_load_acquire to read
 * head will do).
 *
 * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
 * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
 * updating the SQ tail; a full memory barrier smp_mb() is needed
 * between.
 *
 * Also see the examples in the liburing library:
 *
 *	git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
*/
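
As an illustration of the pairing described above, a userspace consumer might
reap completions along the following lines. This is a hypothetical sketch, not
code from this file: khead/ktail stand for the mmap'ed CQ head/tail words,
cqes for the CQE array, and C11 atomics stand in for the kernel-side smp_*
primitives.

#include <stdatomic.h>
#include <linux/io_uring.h>

/* khead/ktail point at the mmap'ed CQ ring head/tail, cqes at cq_off.cqes */
static unsigned reap_cqes(_Atomic unsigned *khead, const _Atomic unsigned *ktail,
			  const struct io_uring_cqe *cqes, unsigned ring_mask)
{
	/* acquire-load of the tail pairs with the kernel's release store */
	unsigned head = atomic_load_explicit(khead, memory_order_relaxed);
	unsigned tail = atomic_load_explicit(ktail, memory_order_acquire);
	unsigned seen = 0;

	while (head != tail) {
		const struct io_uring_cqe *cqe = &cqes[head & ring_mask];

		/* consume cqe->user_data and cqe->res here */
		(void)cqe;
		head++;
		seen++;
	}
	/* release-store of the new head orders the CQE loads before the update */
	atomic_store_explicit(khead, head, memory_order_release);
	return seen;
}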

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <linux/compat.h>
#include <net/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>
#include <linux/bits.h>

#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/blk-mq.h>
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
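
A hedged userspace sketch of the flow described above: register a single
buffer with IORING_REGISTER_BUFFERS, then prepare an IORING_OP_READ_FIXED SQE
against it. The helper names are made up, the 'index' referred to above
corresponds to the sqe->buf_index field in the uapi struct, and error handling
is omitted.

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

static int register_one_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* pin and map the buffer in the kernel for later fixed IO */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}

static void prep_read_fixed(struct io_uring_sqe *sqe, int fd, void *buf,
			    unsigned nbytes, __u64 offset)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = fd;
	sqe->addr = (unsigned long) buf;	/* must lie inside the registered buffer */
	sqe->len = nbytes;
	sqe->off = offset;
	sqe->buf_index = 0;			/* index into the registered iovec array */
}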

#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <net/af_unix.h>
#include <net/scm.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/hugetlb.h>
#include <linux/highmem.h>
#include <linux/fsnotify.h>
#include <linux/fadvise.h>
#include <linux/eventpoll.h>
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special casing as well, to avoid stack-blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
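
The mechanism the last paragraph alludes to is the kernel's generic task_work
facility. A minimal sketch of that pattern follows; it is illustrative only,
not io_uring's actual code, and the structure and function names are made up.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/task_work.h>

struct pending_done {
	struct callback_head work;
	/* ... per-request state ... */
};

static void run_in_task_context(struct callback_head *cb)
{
	struct pending_done *pd = container_of(cb, struct pending_done, work);

	/* runs when the target task heads back to userspace; normal locking applies */
	kfree(pd);
}

static int queue_completion(struct task_struct *task, struct pending_done *pd)
{
	init_task_work(&pd->work, run_in_task_context);
	/* TWA_SIGNAL prompts the task to process the work without delay */
	return task_work_add(task, &pd->work, TWA_SIGNAL);
}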

#include <linux/task_work.h>
#include <linux/pagemap.h>
#include <linux/io_uring.h>
#include <linux/audit.h>
lsm,io_uring: add LSM hooks to io_uring
A full explanation of io_uring is beyond the scope of this commit
description, but in summary it is an asynchronous I/O mechanism
which allows for I/O requests and the resulting data to be queued
in memory mapped "rings" which are shared between the kernel and
userspace. Optionally, io_uring offers the ability for applications
to spawn kernel threads to dequeue I/O requests from the ring and
submit the requests in the kernel, helping to minimize the syscall
overhead. Rings are accessed in userspace by memory mapping a file
descriptor provided by io_uring_setup(2), and can be shared
between applications as one might do with any open file descriptor.
Finally, process credentials can be registered with a given ring
and any process with access to that ring can submit I/O requests
using any of the registered credentials.
While the io_uring functionality is widely recognized as offering a
vastly improved, high-performing asynchronous I/O mechanism, its
ability to allow processes to submit I/O requests with credentials
other than its own presents a challenge to LSMs. When a process
creates a new io_uring ring, the ring's credentials are inherited
from the calling process; if this ring is shared with another
process operating with different credentials there is the potential
to bypass the LSMs security policy. Similarly, registering
credentials with a given ring allows any process with access to that
ring to submit I/O requests with those credentials.
In an effort to allow LSMs to apply security policy to io_uring I/O
operations, this patch adds two new LSM hooks. These hooks, in
conjunction with the LSM anonymous inode support previously
submitted, allow an LSM to apply access control policy to the
sharing of io_uring rings as well as any io_uring credential changes
requested by a process.
The new LSM hooks are described below:
* int security_uring_override_creds(cred)
Controls if the current task, executing an io_uring operation,
is allowed to override its credentials with @cred. In cases
where the current task is a user application, the current
credentials will be those of the user application. In cases
where the current task is a kernel thread servicing io_uring
requests the current credentials will be those of the io_uring
ring (inherited from the process that created the ring).
* int security_uring_sqpoll(void)
Controls if the current task is allowed to create an io_uring
polling thread (IORING_SETUP_SQPOLL). Without a SQPOLL thread
in the kernel, processes must submit I/O requests via
io_uring_enter(2) which allows us to compare any requested
credential changes against the application making the request.
With a SQPOLL thread, we can no longer compare requested
credential changes against the application making the request,
the comparison is made against the ring's credentials.
Signed-off-by: Paul Moore <paul@paul-moore.com>
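
For illustration, a skeletal LSM wiring up the two hooks might look as
follows. This is a hedged sketch against roughly this kernel generation: the
'demo' LSM name and the allow-all hook bodies are placeholders, and the
hook-registration helpers have shifted signature in later releases.

#include <linux/cred.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/lsm_hooks.h>

static int demo_uring_override_creds(const struct cred *new)
{
	/* allow everything; a real LSM would check 'new' against its policy */
	return 0;
}

static int demo_uring_sqpoll(void)
{
	/* gate creation of IORING_SETUP_SQPOLL kernel threads */
	return 0;
}

static struct security_hook_list demo_hooks[] __ro_after_init = {
	LSM_HOOK_INIT(uring_override_creds, demo_uring_override_creds),
	LSM_HOOK_INIT(uring_sqpoll, demo_uring_sqpoll),
};

static int __init demo_lsm_init(void)
{
	security_add_hooks(demo_hooks, ARRAY_SIZE(demo_hooks), "demo");
	return 0;
}

DEFINE_LSM(demo) = {
	.name = "demo",
	.init = demo_lsm_init,
};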

#include <linux/security.h>
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but it looks like some parts could be hard to identify
via this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both the kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance-related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
#define CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>

#include <uapi/linux/io_uring.h>

#include "io-wq.h"

#include "io_uring_types.h"
#include "io_uring.h"
#include "opdef.h"
#include "refs.h"
#include "tctx.h"
#include "sqpoll.h"
#include "fdinfo.h"

#include "xattr.h"
#include "nop.h"
#include "fs.h"
#include "splice.h"
#include "sync.h"
#include "advise.h"
#include "openclose.h"
#include "uring_cmd.h"
#include "epoll.h"
#include "statx.h"
#include "net.h"
#include "msg_ring.h"
#include "timeout.h"
#include "poll.h"
#include "cancel.h"

#define IORING_MAX_ENTRIES	32768
#define IORING_MAX_CQ_ENTRIES	(2 * IORING_MAX_ENTRIES)

/* only define max */
#define IORING_MAX_FIXED_FILES	(1U << 20)
#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
				 IORING_REGISTER_LAST + IORING_OP_LAST)

#define IO_RSRC_TAG_TABLE_SHIFT	(PAGE_SHIFT - 3)
#define IO_RSRC_TAG_TABLE_MAX	(1U << IO_RSRC_TAG_TABLE_SHIFT)
#define IO_RSRC_TAG_TABLE_MASK	(IO_RSRC_TAG_TABLE_MAX - 1)

#define IORING_MAX_REG_BUFFERS	(1U << 14)

#define SQE_COMMON_FLAGS (IOSQE_FIXED_FILE | IOSQE_IO_LINK | \
			  IOSQE_IO_HARDLINK | IOSQE_ASYNC)

#define SQE_VALID_FLAGS	(SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
			 IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)

#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
				REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
				REQ_F_ASYNC_DATA)

#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
				 IO_REQ_CLEAN_FLAGS)

#define IO_TCTX_REFS_CACHE_NR	(1U << 10)

struct io_rsrc_put {
	struct list_head list;
	u64 tag;
	union {
		void *rsrc;
		struct file *file;
		struct io_mapped_ubuf *buf;
	};
};

io_uring: refactor file register/unregister/update handling
While diving into io_uring fileset register/unregister/update codes, we
found one bug in the fileset update handling. io_uring fileset update
uses a percpu_ref variable to check whether we can put the previously
registered file; only when the refcount of the percpu_ref variable
reaches zero can we safely put these files. But this doesn't work so
well. If applications always issue requests continually, this
percpu_ref will never have a chance to reach zero, and it'll always be
in atomic mode, which also defeats the gains introduced by the fileset
register/unregister/update feature, which is meant to reduce the atomic
operation overhead of fput/fget.
To fix this issue, while applications do IORING_REGISTER_FILES or
IORING_REGISTER_FILES_UPDATE operations, we allocate a new percpu_ref
and kill the old percpu_ref, new requests will use the new percpu_ref.
Once all previous old requests complete, old percpu_refs will be dropped
and registered files will be put safely.
Link: https://lore.kernel.org/io-uring/5a8dac33-4ca2-4847-b091-f7dcd3ad0ff3@linux.alibaba.com/T/#t
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
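
The switch-and-kill pattern the commit describes can be sketched with the
generic percpu_ref API roughly as below; the structure and function names are
illustrative only, not the actual io_uring code.

#include <linux/kernel.h>
#include <linux/percpu-refcount.h>
#include <linux/slab.h>

struct rsrc_node_demo {
	struct percpu_ref refs;
	/* ... resources guarded by this node ... */
};

static void rsrc_node_release(struct percpu_ref *ref)
{
	struct rsrc_node_demo *node = container_of(ref, struct rsrc_node_demo, refs);

	/* every request that used the old node is done; resources can be put */
	kfree(node);
}

/* hand out a fresh node for new requests and retire the old one */
static struct rsrc_node_demo *switch_rsrc_node(struct rsrc_node_demo *old)
{
	struct rsrc_node_demo *node = kzalloc(sizeof(*node), GFP_KERNEL);

	if (!node)
		return NULL;
	if (percpu_ref_init(&node->refs, rsrc_node_release, 0, GFP_KERNEL)) {
		kfree(node);
		return NULL;
	}
	if (old)
		percpu_ref_kill(&old->refs);	/* release fires once its refs drop to zero */
	return node;
}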

struct io_rsrc_node {
	struct percpu_ref refs;
	struct list_head node;
	struct list_head rsrc_list;
	struct io_rsrc_data *rsrc_data;
	struct llist_node llist;
	bool done;
};

typedef void (rsrc_put_fn)(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc);

struct io_rsrc_data {
	struct io_ring_ctx *ctx;

	u64 **tags;
	unsigned int nr;
	rsrc_put_fn *do_put;
	atomic_t refs;
	struct completion done;
	bool quiesce;
};

io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at a time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to set up a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers set up with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring; they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how
many buffers are provided back at a time after they have been
consumed:
Test			Replenish		NOPs/sec
================================================================
No provided buffers	NA			~30M
Provided buffers	32			~16M
Provided buffers	1			~10M
Ring buffers		32			~27M
Ring buffers		1			~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means applications can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
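
A hedged userspace sketch of registering such a ring with
IORING_REGISTER_PBUF_RING follows. It requires uapi headers that define
struct io_uring_buf_reg; the helper name is made up and error handling is
trimmed.

#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

static struct io_uring_buf_ring *setup_buf_ring(int ring_fd, unsigned entries)
{
	struct io_uring_buf_reg reg;
	struct io_uring_buf_ring *br;

	/* the buffer ring itself lives in application memory */
	br = mmap(NULL, entries * sizeof(struct io_uring_buf),
		  PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (br == MAP_FAILED)
		return NULL;

	memset(&reg, 0, sizeof(reg));
	reg.ring_addr = (unsigned long) br;
	reg.ring_entries = entries;	/* must be a power of two */
	reg.bgid = 0;			/* buffer group ID used by the SQEs */

	if (syscall(__NR_io_uring_register, ring_fd,
		    IORING_REGISTER_PBUF_RING, &reg, 1) < 0) {
		munmap(br, entries * sizeof(struct io_uring_buf));
		return NULL;
	}
	return br;
}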

#define IO_BUFFER_LIST_BUF_PER_PAGE (PAGE_SIZE / sizeof(struct io_uring_buf))

struct io_buffer_list {
	/*
	 * If ->buf_nr_pages is set, then buf_pages/buf_ring are used. If not,
	 * then these are classic provided buffers and ->buf_list is used.
	 */
	union {
		struct list_head buf_list;
		struct {
			struct page **buf_pages;
			struct io_uring_buf_ring *buf_ring;
		};
	};
	__u16 bgid;

	/* below is for ring provided buffers */
	__u16 buf_nr_pages;
	__u16 nr_entries;
	__u16 head;
	__u16 mask;
};

struct io_buffer {
	struct list_head list;
	__u64 addr;
	__u32 len;
	__u16 bid;
	__u16 bgid;
};

#define IO_COMPL_BATCH			32
#define IO_REQ_CACHE_SIZE		32
#define IO_REQ_ALLOC_BATCH		8

#define BGID_ARRAY			64
/*
 * First field must be the file pointer in all the
 * iocb unions! See also 'struct kiocb' in <linux/fs.h>
 */
struct io_rw {
	/* NOTE: kiocb has the file as the first member, so don't do it here */
	struct kiocb			kiocb;
	u64				addr;
	u32				len;
	rwf_t				flags;
};

struct io_rsrc_update {
	struct file			*file;
	u64				arg;
	u32				nr_args;
	u32				offset;
};

struct io_provide_buf {
	struct file			*file;
	__u64				addr;
	__u32				len;
	__u32				bgid;
	__u16				nbufs;
	__u16				bid;
};

struct io_rw_state {
	struct iov_iter			iter;
	struct iov_iter_state		iter_state;
	struct iovec			fast_iov[UIO_FASTIOV];
};

struct io_async_rw {
	struct io_rw_state		s;
	const struct iovec		*free_iovec;
	size_t				bytes_done;
	struct wait_page_queue		wpq;
};
io_uring: change registration/upd/rsrc tagging ABI
There are aspects of the recently added rsrc registration/update and
tagging ABI that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under one
opcode, which breaks fine-grained control over them via restrictions. It
works for now, but once those are wanted under restrictions it would
require a rework.
It was also inconvenient trying to fit a new resource that doesn't
support all the features (e.g. dynamic update) into the interface, so it
is better to return to IORING_REGISTER_* top-level dispatching.
Second, register/update were considered to accept a type of resource,
however that's not a good idea because there might be several ways of
registering a single resource type, e.g. we may want to add non-contiguous
buffers or something more exotic such as dma-mapped memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them
internal for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 18:37:37 +03:00
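For reference, "top-level dispatching" simply means each resource type keeps
its own io_uring_register() opcode. A minimal, hedged userspace sketch of that
using the long-standing fixed-file path via liburing's io_uring_register_files()
wrapper; the opened file names are placeholders, not something this code
prescribes:

#include <liburing.h>
#include <fcntl.h>

static int register_fixed_files(struct io_uring *ring)
{
	int fds[2];

	fds[0] = open("/dev/null", O_RDONLY);	/* placeholder descriptors */
	fds[1] = open("/dev/zero", O_RDONLY);
	if (fds[0] < 0 || fds[1] < 0)
		return -1;

	/* dispatches to the file-specific IORING_REGISTER_FILES opcode */
	return io_uring_register_files(ring, fds, 2);
}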
enum {
	IORING_RSRC_FILE		= 0,
	IORING_RSRC_BUFFER		= 1,
};

enum {
	IO_CHECK_CQ_OVERFLOW_BIT,
	IO_CHECK_CQ_DROPPED_BIT,
};

struct io_defer_entry {
	struct list_head	list;
	struct io_kiocb		*req;
	u32			seq;
};

/* requests with any of those set should undergo io_disarm_next() */
#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)

static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
					 struct task_struct *task,
					 bool cancel_all);

static void io_dismantle_req(struct io_kiocb *req);
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
				     struct io_uring_rsrc_update2 *up,
				     unsigned nr_args);
static void io_clean_op(struct io_kiocb *req);
static void io_queue_sqe(struct io_kiocb *req);
static void io_rsrc_put_work(struct work_struct *work);

static void io_req_task_queue(struct io_kiocb *req);
static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
static int io_req_prep_async(struct io_kiocb *req);

static void io_eventfd_signal(struct io_ring_ctx *ctx);
static struct kmem_cache *req_cachep;

const char *io_uring_get_opcode(u8 opcode)
{
	if (opcode < IORING_OP_LAST)
		return io_op_defs[opcode].name;
	return "INVALID";
}

struct sock *io_uring_get_socket(struct file *file)
{
#if defined(CONFIG_UNIX)
	if (io_is_uring_fops(file)) {
		struct io_ring_ctx *ctx = file->private_data;

		return ctx->ring_sock->sk;
	}
#endif
	return NULL;
}
EXPORT_SYMBOL(io_uring_get_socket);

#if defined(CONFIG_UNIX)
static inline bool io_file_need_scm(struct file *filp)
{
#if defined(IO_URING_SCM_ALL)
	return true;
#else
	return !!unix_get_socket(filp);
#endif
}
#else
static inline bool io_file_need_scm(struct file *filp)
{
	return false;
}
#endif

static inline void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
{
	if (!*locked) {
		mutex_lock(&ctx->uring_lock);
		*locked = true;
	}
}

static inline void io_submit_flush_completions(struct io_ring_ctx *ctx)
{
	if (!wq_list_empty(&ctx->submit_state.compl_reqs))
		__io_submit_flush_completions(ctx);
}

#define IO_RSRC_REF_BATCH	100

static void io_rsrc_put_node(struct io_rsrc_node *node, int nr)
{
	percpu_ref_put_many(&node->refs, nr);
}

static inline void io_req_put_rsrc_locked(struct io_kiocb *req,
					  struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	struct io_rsrc_node *node = req->rsrc_node;

	if (node) {
		if (node == ctx->rsrc_node)
			ctx->rsrc_cached_refs++;
		else
			io_rsrc_put_node(node, 1);
	}
}

static inline void io_req_put_rsrc(struct io_kiocb *req)
{
	if (req->rsrc_node)
		io_rsrc_put_node(req->rsrc_node, 1);
}

static __cold void io_rsrc_refs_drop(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	if (ctx->rsrc_cached_refs) {
		io_rsrc_put_node(ctx->rsrc_node, ctx->rsrc_cached_refs);
		ctx->rsrc_cached_refs = 0;
	}
}

static void io_rsrc_refs_refill(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	ctx->rsrc_cached_refs += IO_RSRC_REF_BATCH;
	percpu_ref_get_many(&ctx->rsrc_node->refs, IO_RSRC_REF_BATCH);
}

static inline void io_req_set_rsrc_node(struct io_kiocb *req,
					struct io_ring_ctx *ctx,
					unsigned int issue_flags)
{
	if (!req->rsrc_node) {
		req->rsrc_node = ctx->rsrc_node;

		if (!(issue_flags & IO_URING_F_UNLOCKED)) {
			lockdep_assert_held(&ctx->uring_lock);
			ctx->rsrc_cached_refs--;
			if (unlikely(ctx->rsrc_cached_refs < 0))
				io_rsrc_refs_refill(ctx);
		} else {
			percpu_ref_get(&req->rsrc_node->refs);
		}
	}
}

static unsigned int __io_put_kbuf(struct io_kiocb *req, struct list_head *list)
{
	if (req->flags & REQ_F_BUFFER_RING) {
		if (req->buf_list)
			req->buf_list->head++;
		req->flags &= ~REQ_F_BUFFER_RING;
	} else {
		list_add(&req->kbuf->list, list);
		req->flags &= ~REQ_F_BUFFER_SELECTED;
	}

	return IORING_CQE_F_BUFFER | (req->buf_index << IORING_CQE_BUFFER_SHIFT);
}

static inline unsigned int io_put_kbuf_comp(struct io_kiocb *req)
{
	lockdep_assert_held(&req->ctx->completion_lock);

	if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
		return 0;
	return __io_put_kbuf(req, &req->ctx->io_buffers_comp);
}

inline unsigned int io_put_kbuf(struct io_kiocb *req, unsigned issue_flags)
{
	unsigned int cflags;

	if (!(req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)))
		return 0;

	/*
	 * We can add this buffer back to two lists:
	 *
	 * 1) The io_buffers_cache list. This one is protected by the
	 *    ctx->uring_lock. If we already hold this lock, add back to this
	 *    list as we can grab it from issue as well.
	 * 2) The io_buffers_comp list. This one is protected by the
	 *    ctx->completion_lock.
	 *
	 * We migrate buffers from the comp_list to the issue cache list
	 * when we need one.
	 */
	if (req->flags & REQ_F_BUFFER_RING) {
		/* no buffers to recycle for this case */
		cflags = __io_put_kbuf(req, NULL);
	} else if (issue_flags & IO_URING_F_UNLOCKED) {
		struct io_ring_ctx *ctx = req->ctx;

		spin_lock(&ctx->completion_lock);
		cflags = __io_put_kbuf(req, &ctx->io_buffers_comp);
		spin_unlock(&ctx->completion_lock);
	} else {
		lockdep_assert_held(&req->ctx->uring_lock);

		cflags = __io_put_kbuf(req, &req->ctx->io_buffers_cache);
	}

	return cflags;
}

static struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx *ctx,
						 unsigned int bgid)
{
	if (ctx->io_bl && bgid < BGID_ARRAY)
		return &ctx->io_bl[bgid];

	return xa_load(&ctx->io_bl_xa, bgid);
}

void __io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_buffer_list *bl;
	struct io_buffer *buf;

	/*
	 * We don't need to recycle for REQ_F_BUFFER_RING, we can just clear
	 * the flag and hence ensure that bl->head doesn't get incremented.
	 * If the tail has already been incremented, hang on to it.
	 */
	if (req->flags & REQ_F_BUFFER_RING) {
		if (req->buf_list) {
			if (req->flags & REQ_F_PARTIAL_IO) {
				req->buf_list->head++;
				req->buf_list = NULL;
			} else {
				req->buf_index = req->buf_list->bgid;
				req->flags &= ~REQ_F_BUFFER_RING;
			}
		}
		return;
	}

	io_ring_submit_lock(ctx, issue_flags);

	buf = req->kbuf;
	bl = io_buffer_get_list(ctx, buf->bgid);
	list_add(&buf->list, &bl->buf_list);
	req->flags &= ~REQ_F_BUFFER_SELECTED;
	req->buf_index = buf->bgid;

	io_ring_submit_unlock(ctx, issue_flags);
}

static bool io_match_linked(struct io_kiocb *head)
{
	struct io_kiocb *req;

	io_for_each_link(req, head) {
		if (req->flags & REQ_F_INFLIGHT)
			return true;
	}
	return false;
}

/*
 * As io_match_task() but protected against racing with linked timeouts.
 * User must not hold timeout_lock.
 */
bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
			bool cancel_all)
{
	bool matched;

	if (task && head->task != task)
		return false;
	if (cancel_all)
		return true;

	if (head->flags & REQ_F_LINK_TIMEOUT) {
		struct io_ring_ctx *ctx = head->ctx;

		/* protect against races with linked timeouts */
		spin_lock_irq(&ctx->timeout_lock);
		matched = io_match_linked(head);
		spin_unlock_irq(&ctx->timeout_lock);
	} else {
		matched = io_match_linked(head);
	}
	return matched;
}

static inline void req_fail_link_node(struct io_kiocb *req, int res)
{
	req_set_fail(req);
	io_req_set_res(req, res, 0);
}

static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
	wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
}

static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
{
	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);

	complete(&ctx->ref_comp);
}

static __cold void io_fallback_req_func(struct work_struct *work)
{
	struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
						fallback_work.work);
	struct llist_node *node = llist_del_all(&ctx->fallback_llist);
	struct io_kiocb *req, *tmp;
	bool locked = false;

	percpu_ref_get(&ctx->refs);
	llist_for_each_entry_safe(req, tmp, node, io_task_work.fallback_node)
		req->io_task_work.func(req, &locked);

	if (locked) {
		io_submit_flush_completions(ctx);
		mutex_unlock(&ctx->uring_lock);
	}
	percpu_ref_put(&ctx->refs);
}

static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
{
	struct io_ring_ctx *ctx;
	int hash_bits;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;

	xa_init(&ctx->io_bl_xa);

	/*
	 * Use 5 bits less than the max cq entries, that should give us around
	 * 32 entries per hash list if totally full and uniformly spread.
	 */
	hash_bits = ilog2(p->cq_entries);
	hash_bits -= 5;
	if (hash_bits <= 0)
		hash_bits = 1;
	ctx->cancel_hash_bits = hash_bits;
	ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
					GFP_KERNEL);
	if (!ctx->cancel_hash)
		goto err;
	__hash_init(ctx->cancel_hash, 1U << hash_bits);

	ctx->dummy_ubuf = kzalloc(sizeof(*ctx->dummy_ubuf), GFP_KERNEL);
	if (!ctx->dummy_ubuf)
		goto err;
	/* set invalid range, so io_import_fixed() fails meeting it */
	ctx->dummy_ubuf->ubuf = -1UL;

	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
		goto err;

	ctx->flags = p->flags;
	init_waitqueue_head(&ctx->sqo_sq_wait);
	INIT_LIST_HEAD(&ctx->sqd_list);
io_uring: add support for backlogged CQ ring
Currently we drop completion events, if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them, they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 21:31:17 +03:00
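From the application side this back pressure is easy to honor: when a submit
attempt returns -EBUSY, drain the CQ ring first and then retry. A hedged
userspace sketch using liburing helpers; handle_cqe() is an assumed,
application-defined hook, not part of any library API:

#include <liburing.h>
#include <errno.h>

extern void handle_cqe(struct io_uring_cqe *cqe);	/* application hook, assumed */

static int submit_with_backpressure(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int ret;

	do {
		ret = io_uring_submit(ring);
		if (ret != -EBUSY)
			break;
		/* CQ backlog is non-empty: reap completions, then retry */
		while (io_uring_peek_cqe(ring, &cqe) == 0) {
			handle_cqe(cqe);
			io_uring_cqe_seen(ring, cqe);
		}
	} while (1);

	return ret;
}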
	INIT_LIST_HEAD(&ctx->cq_overflow_list);
	INIT_LIST_HEAD(&ctx->io_buffers_cache);
	INIT_LIST_HEAD(&ctx->apoll_cache);
	init_completion(&ctx->ref_comp);
	xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
2019-01-07 20:46:33 +03:00
|
|
|
mutex_init(&ctx->uring_lock);
|
2021-06-15 01:37:28 +03:00
|
|
|
init_waitqueue_head(&ctx->cq_wait);
|
2019-01-07 20:46:33 +03:00
|
|
|
spin_lock_init(&ctx->completion_lock);
|
2021-08-11 00:11:51 +03:00
|
|
|
spin_lock_init(&ctx->timeout_lock);
|
2021-09-24 23:59:49 +03:00
|
|
|
INIT_WQ_LIST(&ctx->iopoll_list);
|
2022-03-09 03:46:52 +03:00
|
|
|
INIT_LIST_HEAD(&ctx->io_buffers_pages);
|
|
|
|
INIT_LIST_HEAD(&ctx->io_buffers_comp);
|
2019-04-07 06:51:27 +03:00
|
|
|
INIT_LIST_HEAD(&ctx->defer_list);
|
2019-09-17 21:26:57 +03:00
|
|
|
INIT_LIST_HEAD(&ctx->timeout_list);
|
2021-08-29 04:54:38 +03:00
|
|
|
INIT_LIST_HEAD(&ctx->ltimeout_list);
|
2021-01-15 20:37:46 +03:00
|
|
|
spin_lock_init(&ctx->rsrc_ref_lock);
|
|
|
|
INIT_LIST_HEAD(&ctx->rsrc_ref_list);
|
2021-01-15 20:37:44 +03:00
|
|
|
INIT_DELAYED_WORK(&ctx->rsrc_put_work, io_rsrc_put_work);
|
|
|
|
init_llist_head(&ctx->rsrc_put_llist);
|
2021-03-06 14:02:12 +03:00
|
|
|
INIT_LIST_HEAD(&ctx->tctx_list);
|
2021-09-24 23:59:47 +03:00
|
|
|
ctx->submit_state.free_list.next = NULL;
|
|
|
|
INIT_WQ_LIST(&ctx->locked_free_list);
|
2021-06-30 23:54:03 +03:00
|
|
|
INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
|
2021-09-24 23:59:44 +03:00
|
|
|
INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
|
2019-01-07 20:46:33 +03:00
|
|
|
return ctx;
|
2019-11-08 04:27:42 +03:00
|
|
|
err:
|
2021-04-28 15:11:29 +03:00
|
|
|
kfree(ctx->dummy_ubuf);
|
2019-12-05 05:56:40 +03:00
|
|
|
kfree(ctx->cancel_hash);
|
2022-05-01 19:52:44 +03:00
|
|
|
kfree(ctx->io_bl);
|
|
|
|
xa_destroy(&ctx->io_bl_xa);
|
2019-11-08 04:27:42 +03:00
|
|
|
kfree(ctx);
|
|
|
|
return NULL;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2021-05-17 00:58:10 +03:00
|
|
|
static void io_account_cq_overflow(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_rings *r = ctx->rings;
|
|
|
|
|
|
|
|
WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
|
|
|
|
ctx->cq_extra--;
|
|
|
|
}
|
|
|
|
|
2020-07-13 23:37:15 +03:00
|
|
|
static bool req_need_defer(struct io_kiocb *req, u32 seq)
|
2019-10-11 06:42:58 +03:00
|
|
|
{
|
2020-07-09 18:43:27 +03:00
|
|
|
if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2019-11-08 18:09:12 +03:00
|
|
|
|
2021-05-17 00:58:10 +03:00
|
|
|
return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
|
2020-07-09 18:43:27 +03:00
|
|
|
}
|
2019-04-07 06:51:27 +03:00
|
|
|
|
2019-11-13 13:06:25 +03:00
|
|
|
return false;
|
2019-04-07 06:51:27 +03:00
|
|
|
}
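/*
 * Hedged usage sketch (assuming liburing): the sequence check above backs
 * IOSQE_IO_DRAIN, which asks the kernel to hold an SQE back until every
 * previously submitted request has completed. queue_drained_fsync() is a
 * hypothetical helper for illustration only.
 */
#include <liburing.h>

static int queue_drained_fsync(struct io_uring *ring, int fd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;	/* SQ ring is full */
	io_uring_prep_fsync(sqe, fd, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_IO_DRAIN);
	return 0;
}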
|
|
|
|
|
2021-08-09 15:04:04 +03:00
|
|
|
static inline bool io_req_ffs_set(struct io_kiocb *req)
|
|
|
|
{
|
2021-10-17 02:07:09 +03:00
|
|
|
return req->flags & REQ_F_FIXED_FILE;
|
2021-08-09 15:04:04 +03:00
|
|
|
}
|
|
|
|
|
2022-06-02 08:57:02 +03:00
|
|
|
static inline void io_req_track_inflight(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (!(req->flags & REQ_F_INFLIGHT)) {
|
|
|
|
req->flags |= REQ_F_INFLIGHT;
|
2022-06-23 20:06:43 +03:00
|
|
|
atomic_inc(&req->task->io_uring->inflight_tracked);
|
2022-06-02 08:57:02 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-08-11 21:28:31 +03:00
|
|
|
static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 12:40:26 +03:00
|
|
|
if (WARN_ON_ONCE(!req->link))
|
|
|
|
return NULL;
|
|
|
|
|
2021-08-15 12:40:24 +03:00
|
|
|
req->flags &= ~REQ_F_ARM_LTIMEOUT;
|
|
|
|
req->flags |= REQ_F_LINK_TIMEOUT;
|
2021-08-11 21:28:31 +03:00
|
|
|
|
|
|
|
/* linked timeouts should have two refs once prep'ed */
|
2021-08-15 12:40:18 +03:00
|
|
|
io_req_set_refcount(req);
|
2021-08-15 12:40:24 +03:00
|
|
|
__io_req_set_refcount(req->link, 2);
|
|
|
|
return req->link;
|
2021-08-11 21:28:31 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
|
|
|
|
{
|
2021-08-15 12:40:24 +03:00
|
|
|
if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
|
2021-08-11 21:28:31 +03:00
|
|
|
return NULL;
|
|
|
|
return __io_prep_linked_timeout(req);
|
|
|
|
}
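/*
 * Hedged usage sketch (assuming liburing): the linked-timeout preparation
 * above is driven from userspace by linking an IORING_OP_LINK_TIMEOUT SQE
 * to the preceding request. queue_read_with_timeout() is a hypothetical
 * helper; SQE exhaustion handling is elided.
 */
#include <liburing.h>

static void queue_read_with_timeout(struct io_uring *ring, int fd, void *buf,
				    unsigned int len,
				    struct __kernel_timespec *ts)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);
	sqe->flags |= IOSQE_IO_LINK;		/* chain the timeout to the read */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_link_timeout(sqe, ts, 0);	/* cancels the read if it fires first */
}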
|
|
|
|
|
2022-04-16 00:08:25 +03:00
|
|
|
static noinline void __io_arm_ltimeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
io_queue_linked_timeout(__io_prep_linked_timeout(req));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void io_arm_ltimeout(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
|
|
|
|
__io_arm_ltimeout(req);
|
|
|
|
}
|
|
|
|
|
2020-10-15 17:46:24 +03:00
|
|
|
static void io_prep_async_work(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
const struct io_op_def *def = &io_op_defs[req->opcode];
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-06-17 20:14:02 +03:00
|
|
|
if (!(req->flags & REQ_F_CREDS)) {
|
|
|
|
req->flags |= REQ_F_CREDS;
|
2021-06-17 20:14:01 +03:00
|
|
|
req->creds = get_current_cred();
|
2021-06-17 20:14:02 +03:00
|
|
|
}
|
2021-03-06 19:22:27 +03:00
|
|
|
|
2021-03-22 04:58:29 +03:00
|
|
|
req->work.list.next = NULL;
|
|
|
|
req->work.flags = 0;
|
2022-04-18 19:44:00 +03:00
|
|
|
req->work.cancel_seq = atomic_read(&ctx->cancel_seq);
|
2020-10-22 18:47:16 +03:00
|
|
|
if (req->flags & REQ_F_FORCE_ASYNC)
|
|
|
|
req->work.flags |= IO_WQ_WORK_CONCURRENT;
|
|
|
|
|
2020-10-15 17:46:24 +03:00
|
|
|
if (req->flags & REQ_F_ISREG) {
|
|
|
|
if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
io_wq_hash_work(&req->work, file_inode(req->file));
|
2021-04-01 17:38:34 +03:00
|
|
|
} else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
|
2020-10-15 17:46:24 +03:00
|
|
|
if (def->unbound_nonreg_file)
|
|
|
|
req->work.flags |= IO_WQ_WORK_UNBOUND;
|
|
|
|
}
|
2019-10-24 16:25:42 +03:00
|
|
|
}
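/*
 * Hedged usage sketch (assuming liburing): REQ_F_FORCE_ASYNC above is the
 * kernel-side view of IOSQE_ASYNC, which sends the request straight to
 * io-wq instead of attempting it inline first. queue_async_read() is a
 * hypothetical helper for illustration only.
 */
#include <liburing.h>

static int queue_async_read(struct io_uring *ring, int fd, void *buf,
			    unsigned int len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;	/* SQ ring is full */
	io_uring_prep_read(sqe, fd, buf, len, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);
	return 0;
}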
|
2020-01-28 02:34:48 +03:00
|
|
|
|
2020-06-29 19:18:43 +03:00
|
|
|
static void io_prep_async_link(struct io_kiocb *req)
|
2019-10-24 16:25:42 +03:00
|
|
|
{
|
2020-06-29 19:18:43 +03:00
|
|
|
struct io_kiocb *cur;
|
2019-09-10 18:15:04 +03:00
|
|
|
|
2021-07-26 16:14:31 +03:00
|
|
|
if (req->flags & REQ_F_LINK_TIMEOUT) {
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
|
2021-11-23 04:45:35 +03:00
|
|
|
spin_lock_irq(&ctx->timeout_lock);
|
2021-07-26 16:14:31 +03:00
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
2021-11-23 04:45:35 +03:00
|
|
|
spin_unlock_irq(&ctx->timeout_lock);
|
2021-07-26 16:14:31 +03:00
|
|
|
} else {
|
|
|
|
io_for_each_link(cur, req)
|
|
|
|
io_prep_async_work(cur);
|
|
|
|
}
|
2019-10-24 16:25:42 +03:00
|
|
|
}
|
|
|
|
|
2021-10-04 22:02:48 +03:00
|
|
|
static inline void io_req_add_compl_list(struct io_kiocb *req)
|
|
|
|
{
|
2022-03-25 14:52:17 +03:00
|
|
|
struct io_submit_state *state = &req->ctx->submit_state;
|
2021-10-04 22:02:48 +03:00
|
|
|
|
2021-11-10 18:49:33 +03:00
|
|
|
if (!(req->flags & REQ_F_CQE_SKIP))
|
2022-03-25 14:52:17 +03:00
|
|
|
state->flush_cqes = true;
|
2021-10-04 22:02:48 +03:00
|
|
|
wq_list_add_tail(&req->comp_list, &state->compl_reqs);
|
|
|
|
}
|
|
|
|
|
2022-04-16 00:08:27 +03:00
|
|
|
static void io_queue_iowq(struct io_kiocb *req, bool *dont_use)
|
2019-10-24 16:25:42 +03:00
|
|
|
{
|
2020-06-29 19:18:43 +03:00
|
|
|
struct io_kiocb *link = io_prep_linked_timeout(req);
|
2021-02-16 22:56:50 +03:00
|
|
|
struct io_uring_task *tctx = req->task->io_uring;
|
2019-10-24 16:25:42 +03:00
|
|
|
|
2021-02-17 00:15:30 +03:00
|
|
|
BUG_ON(!tctx);
|
|
|
|
BUG_ON(!tctx->io_wq);
|
2019-10-24 16:25:42 +03:00
|
|
|
|
2020-06-29 19:18:43 +03:00
|
|
|
/* init ->work of the whole link before punting */
|
|
|
|
io_prep_async_link(req);
|
2021-07-23 20:53:54 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Not expected to happen, but if we do have a bug where this _can_
|
|
|
|
* happen, catch it here and ensure the request is marked as
|
|
|
|
* canceled. That will make io-wq go through the usual work cancel
|
|
|
|
* procedure rather than attempt to run this request (or create a new
|
|
|
|
* worker for it).
|
|
|
|
*/
|
|
|
|
if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
|
|
|
|
req->work.flags |= IO_WQ_WORK_CANCEL;
|
|
|
|
|
2022-04-16 00:08:22 +03:00
|
|
|
trace_io_uring_queue_async_work(req->ctx, req, req->cqe.user_data,
|
|
|
|
req->opcode, req->flags, &req->work,
|
|
|
|
io_wq_is_hashed(&req->work));
|
2021-03-01 21:20:47 +03:00
|
|
|
io_wq_enqueue(tctx->io_wq, &req->work);
|
2020-08-10 18:55:22 +03:00
|
|
|
if (link)
|
|
|
|
io_queue_linked_timeout(link);
|
2020-06-29 19:18:43 +03:00
|
|
|
}
|
|
|
|
|
2021-10-04 22:02:54 +03:00
|
|
|
static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
|
2019-04-07 06:51:27 +03:00
|
|
|
{
|
2021-06-15 01:37:31 +03:00
|
|
|
while (!list_empty(&ctx->defer_list)) {
|
2020-07-13 23:37:14 +03:00
|
|
|
struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
|
|
|
|
struct io_defer_entry, list);
|
2019-04-07 06:51:27 +03:00
|
|
|
|
2020-07-13 23:37:15 +03:00
|
|
|
if (req_need_defer(de->req, de->seq))
|
2020-05-26 20:34:05 +03:00
|
|
|
break;
|
2020-07-13 23:37:14 +03:00
|
|
|
list_del_init(&de->list);
|
2021-01-27 02:35:10 +03:00
|
|
|
io_req_task_queue(de->req);
|
2020-07-13 23:37:14 +03:00
|
|
|
kfree(de);
|
2021-06-15 01:37:31 +03:00
|
|
|
}
|
2020-05-26 20:34:05 +03:00
|
|
|
}
|
|
|
|
|
2022-03-17 05:03:42 +03:00
|
|
|
static void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
|
2020-05-30 14:54:17 +03:00
|
|
|
{
|
2022-03-17 05:03:42 +03:00
|
|
|
if (ctx->off_timeout_used || ctx->drain_active) {
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
if (ctx->off_timeout_used)
|
|
|
|
io_flush_timeouts(ctx);
|
|
|
|
if (ctx->drain_active)
|
|
|
|
io_queue_deferred(ctx);
|
|
|
|
io_commit_cqring(ctx);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
if (ctx->has_evfd)
|
|
|
|
io_eventfd_signal(ctx);
|
2019-04-07 06:51:27 +03:00
|
|
|
}
|
|
|
|
|
2021-01-19 16:32:39 +03:00
|
|
|
static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
|
|
|
|
}
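/*
 * Hedged userspace counterpart (assuming liburing's public ring layout):
 * the same tail - head arithmetic tells the application how many CQEs are
 * ready, except that it must acquire-load the tail the kernel publishes.
 * liburing's io_uring_cq_ready() provides this; cq_entries_ready() below
 * is a hypothetical re-implementation for illustration.
 */
#include <liburing.h>

static inline unsigned int cq_entries_ready(const struct io_uring *ring)
{
	unsigned int tail = __atomic_load_n(ring->cq.ktail, __ATOMIC_ACQUIRE);

	return tail - *ring->cq.khead;	/* head is only advanced by the application */
}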
|
|
|
|
|
2022-04-12 17:09:51 +03:00
|
|
|
/*
|
|
|
|
* writes to the cq entry need to come after reading head; the
|
|
|
|
* control dependency is enough as we're using WRITE_ONCE to
|
|
|
|
* fill the cq entry
|
|
|
|
*/
|
|
|
|
static noinline struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2019-08-26 20:23:46 +03:00
|
|
|
struct io_rings *rings = ctx->rings;
|
2022-04-12 17:09:51 +03:00
|
|
|
unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
|
2022-04-26 21:21:28 +03:00
|
|
|
unsigned int shift = 0;
|
2022-04-12 17:09:51 +03:00
|
|
|
unsigned int free, queued, len;
|
|
|
|
|
2022-04-26 21:21:28 +03:00
|
|
|
if (ctx->flags & IORING_SETUP_CQE32)
|
|
|
|
shift = 1;
|
|
|
|
|
2022-04-12 17:09:51 +03:00
|
|
|
/* userspace may cheat modifying the tail, be safe and do min */
|
|
|
|
queued = min(__io_cqring_events(ctx), ctx->cq_entries);
|
|
|
|
free = ctx->cq_entries - queued;
|
|
|
|
/* we need a contiguous range, limit based on the current array offset */
|
|
|
|
len = min(free, ctx->cq_entries - off);
|
|
|
|
if (!len)
|
2019-01-07 20:46:33 +03:00
|
|
|
return NULL;
|
|
|
|
|
2022-04-12 17:09:51 +03:00
|
|
|
ctx->cached_cq_tail++;
|
|
|
|
ctx->cqe_cached = &rings->cqes[off];
|
|
|
|
ctx->cqe_sentinel = ctx->cqe_cached + len;
|
2022-04-26 21:21:28 +03:00
|
|
|
ctx->cqe_cached++;
|
|
|
|
return &rings->cqes[off << shift];
|
2022-04-12 17:09:51 +03:00
|
|
|
}
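/*
 * Hedged illustration of the indexing above: with IORING_SETUP_CQE32 every
 * completion occupies two struct io_uring_cqe slots, so logical slot 'off'
 * lives at array index 'off << 1' (slot 3 maps to cqes[6], for example).
 * cqe_slot() is a hypothetical helper, not kernel code.
 */
#include <stdbool.h>
#include <linux/io_uring.h>

static inline struct io_uring_cqe *cqe_slot(struct io_uring_cqe *cqes,
					    unsigned int off, bool big_cqe)
{
	return &cqes[off << (big_cqe ? 1 : 0)];
}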
|
|
|
|
|
|
|
|
static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
if (likely(ctx->cqe_cached < ctx->cqe_sentinel)) {
|
2022-04-26 21:21:28 +03:00
|
|
|
struct io_uring_cqe *cqe = ctx->cqe_cached;
|
|
|
|
|
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
unsigned int off = ctx->cqe_cached - ctx->rings->cqes;
|
|
|
|
|
|
|
|
cqe += off;
|
|
|
|
}
|
|
|
|
|
2022-04-12 17:09:51 +03:00
|
|
|
ctx->cached_cq_tail++;
|
2022-04-26 21:21:28 +03:00
|
|
|
ctx->cqe_cached++;
|
|
|
|
return cqe;
|
2022-04-12 17:09:51 +03:00
|
|
|
}
|
2022-04-26 21:21:28 +03:00
|
|
|
|
2022-04-12 17:09:51 +03:00
|
|
|
return __io_get_cqe(ctx);
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2022-02-04 17:51:14 +03:00
|
|
|
static void io_eventfd_signal(struct io_ring_ctx *ctx)
|
2020-01-08 21:04:00 +03:00
|
|
|
{
|
2022-02-04 17:51:14 +03:00
|
|
|
struct io_ev_fd *ev_fd;
|
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
/*
|
|
|
|
* rcu_dereference ctx->io_ev_fd once and use it for both checking
|
|
|
|
* and eventfd_signal
|
|
|
|
*/
|
|
|
|
ev_fd = rcu_dereference(ctx->io_ev_fd);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check again if ev_fd exists in case an io_eventfd_unregister call
|
|
|
|
* completed between the NULL check of ctx->io_ev_fd at the start of
|
|
|
|
* the function and rcu_read_lock.
|
|
|
|
*/
|
|
|
|
if (unlikely(!ev_fd))
|
|
|
|
goto out;
|
2020-05-15 19:38:05 +03:00
|
|
|
if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
|
2022-02-04 17:51:14 +03:00
|
|
|
goto out;
|
|
|
|
|
2022-02-04 17:51:15 +03:00
|
|
|
if (!ev_fd->eventfd_async || io_wq_current_is_worker())
|
2022-02-04 17:51:14 +03:00
|
|
|
eventfd_signal(ev_fd->cq_ev_fd, 1);
|
|
|
|
out:
|
|
|
|
rcu_read_unlock();
|
2020-01-08 21:04:00 +03:00
|
|
|
}
|
|
|
|
|
2022-03-17 05:03:42 +03:00
|
|
|
static inline void io_cqring_wake(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* wake_up_all() may seem excessive, but io_wake_function() and
|
|
|
|
* io_should_wake() handle the termination of the loop and only
|
|
|
|
* wake as many waiters as we need to.
|
|
|
|
*/
|
|
|
|
if (wq_has_sleeper(&ctx->cq_wait))
|
|
|
|
wake_up_all(&ctx->cq_wait);
|
|
|
|
}
|
|
|
|
|
2021-08-21 16:21:19 +03:00
|
|
|
/*
|
|
|
|
* This should only get called when at least one event has been posted.
|
|
|
|
* Some applications rely on the eventfd notification count only changing
|
|
|
|
* IFF a new CQE has been added to the CQ ring. There's no dependency on
|
|
|
|
* 1:1 relationship between how many times this function is called (and
|
|
|
|
* hence the eventfd count) and number of CQEs posted to the CQ ring.
|
|
|
|
*/
|
2022-05-25 15:25:13 +03:00
|
|
|
void io_cqring_ev_posted(struct io_ring_ctx *ctx)
|
2019-11-06 21:31:17 +03:00
|
|
|
{
|
2022-03-17 05:03:42 +03:00
|
|
|
if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
|
|
|
|
ctx->has_evfd))
|
2022-03-17 05:03:41 +03:00
|
|
|
__io_commit_cqring_flush(ctx);
|
|
|
|
|
2022-03-17 05:03:42 +03:00
|
|
|
io_cqring_wake(ctx);
|
2019-11-06 21:31:17 +03:00
|
|
|
}
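/*
 * Hedged sketch (assuming liburing and eventfd): as the comment above
 * notes, the eventfd count only signals that new CQEs may be present, so a
 * reader drains the whole CQ ring after each wakeup rather than counting on
 * a 1:1 mapping. wait_via_eventfd() is a hypothetical helper; setup errors
 * are only minimally handled.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <liburing.h>

static int wait_via_eventfd(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	uint64_t cnt;
	int efd;

	efd = eventfd(0, 0);
	if (efd < 0 || io_uring_register_eventfd(ring, efd) < 0)
		return -1;

	if (read(efd, &cnt, sizeof(cnt)) != sizeof(cnt))	/* blocks until signalled */
		return -1;

	while (!io_uring_peek_cqe(ring, &cqe)) {	/* reap everything available */
		/* inspect cqe->user_data / cqe->res here */
		io_uring_cqe_seen(ring, cqe);
	}
	return 0;
}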
|
|
|
|
|
2021-01-07 06:15:41 +03:00
|
|
|
static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2022-03-17 05:03:42 +03:00
|
|
|
if (unlikely(ctx->off_timeout_used || ctx->drain_active ||
|
|
|
|
ctx->has_evfd))
|
2022-03-17 05:03:41 +03:00
|
|
|
__io_commit_cqring_flush(ctx);
|
|
|
|
|
2022-03-17 05:03:42 +03:00
|
|
|
if (ctx->flags & IORING_SETUP_SQPOLL)
|
|
|
|
io_cqring_wake(ctx);
|
2021-01-07 06:15:41 +03:00
|
|
|
}
|
|
|
|
|
2019-11-22 07:01:26 +03:00
|
|
|
/* Returns true if there are no backlogged entries after the flush */
|
2021-02-23 15:40:22 +03:00
|
|
|
static bool __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
|
2019-11-06 21:31:17 +03:00
|
|
|
{
|
2021-01-25 02:58:56 +03:00
|
|
|
bool all_flushed, posted;
|
2022-04-26 21:21:30 +03:00
|
|
|
size_t cqe_size = sizeof(struct io_uring_cqe);
|
2019-11-06 21:31:17 +03:00
|
|
|
|
2021-05-17 00:58:08 +03:00
|
|
|
if (!force && __io_cqring_events(ctx) == ctx->cq_entries)
|
2020-12-17 03:24:37 +03:00
|
|
|
return false;
|
2019-11-06 21:31:17 +03:00
|
|
|
|
2022-04-26 21:21:30 +03:00
|
|
|
if (ctx->flags & IORING_SETUP_CQE32)
|
|
|
|
cqe_size <<= 1;
|
|
|
|
|
2021-01-25 02:58:56 +03:00
|
|
|
posted = false;
|
2021-08-11 00:18:27 +03:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-02-23 15:40:22 +03:00
|
|
|
while (!list_empty(&ctx->cq_overflow_list)) {
|
2021-05-17 00:58:11 +03:00
|
|
|
struct io_uring_cqe *cqe = io_get_cqe(ctx);
|
2021-02-23 15:40:22 +03:00
|
|
|
struct io_overflow_cqe *ocqe;
|
2020-09-28 22:10:13 +03:00
|
|
|
|
2019-11-06 21:31:17 +03:00
|
|
|
if (!cqe && !force)
|
|
|
|
break;
|
2021-02-23 15:40:22 +03:00
|
|
|
ocqe = list_first_entry(&ctx->cq_overflow_list,
|
|
|
|
struct io_overflow_cqe, list);
|
|
|
|
if (cqe)
|
2022-04-26 21:21:30 +03:00
|
|
|
memcpy(cqe, &ocqe->cqe, cqe_size);
|
2021-02-23 15:40:22 +03:00
|
|
|
else
|
2021-05-17 00:58:10 +03:00
|
|
|
io_account_cq_overflow(ctx);
|
|
|
|
|
2021-01-25 02:58:56 +03:00
|
|
|
posted = true;
|
2021-02-23 15:40:22 +03:00
|
|
|
list_del(&ocqe->list);
|
|
|
|
kfree(ocqe);
|
2019-11-06 21:31:17 +03:00
|
|
|
}
|
|
|
|
|
2020-12-17 03:24:38 +03:00
|
|
|
all_flushed = list_empty(&ctx->cq_overflow_list);
|
|
|
|
if (all_flushed) {
|
2022-04-21 12:13:43 +03:00
|
|
|
clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
|
2022-04-26 04:49:00 +03:00
|
|
|
atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
|
2020-12-17 03:24:38 +03:00
|
|
|
}
|
2020-07-30 18:43:49 +03:00
|
|
|
|
2022-03-22 01:02:20 +03:00
|
|
|
io_commit_cqring(ctx);
|
2021-08-11 00:18:27 +03:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-01-25 02:58:56 +03:00
|
|
|
if (posted)
|
|
|
|
io_cqring_ev_posted(ctx);
|
2020-12-17 03:24:38 +03:00
|
|
|
return all_flushed;
|
2019-11-06 21:31:17 +03:00
|
|
|
}
|
|
|
|
|
2021-08-09 22:18:12 +03:00
|
|
|
static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx)
|
2021-01-04 23:36:36 +03:00
|
|
|
{
|
2021-03-05 03:15:48 +03:00
|
|
|
bool ret = true;
|
|
|
|
|
2022-04-21 12:13:43 +03:00
|
|
|
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
|
2021-01-04 23:36:36 +03:00
|
|
|
/* iopoll syncs against uring_lock, not completion_lock */
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL)
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-08-09 22:18:12 +03:00
|
|
|
ret = __io_cqring_overflow_flush(ctx, false);
|
2021-01-04 23:36:36 +03:00
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL)
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
2021-03-05 03:15:48 +03:00
|
|
|
|
|
|
|
return ret;
|
2021-01-04 23:36:36 +03:00
|
|
|
}
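/*
 * Hedged sketch (raw syscall form, assuming recent kernel headers): if the
 * application sees IORING_SQ_CQ_OVERFLOW set in the mapped SQ flags word, a
 * plain io_uring_enter() with IORING_ENTER_GETEVENTS and min_complete == 0
 * lets the kernel flush the overflow backlog into the CQ ring without
 * blocking. flush_cq_overflow() and the sq_flags pointer are hypothetical.
 */
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int flush_cq_overflow(int ring_fd, const unsigned int *sq_flags)
{
	if (!(*(volatile const unsigned int *)sq_flags & IORING_SQ_CQ_OVERFLOW))
		return 0;
	/* to_submit == 0, min_complete == 0: just let the kernel flush the backlog */
	return (int)syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			    IORING_ENTER_GETEVENTS, NULL, 0);
}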
|
|
|
|
|
2022-03-25 14:52:15 +03:00
|
|
|
static void __io_put_task(struct task_struct *task, int nr)
|
2021-08-09 15:04:13 +03:00
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = task->io_uring;
|
|
|
|
|
2022-03-25 14:52:15 +03:00
|
|
|
percpu_counter_sub(&tctx->inflight, nr);
|
|
|
|
if (unlikely(atomic_read(&tctx->in_idle)))
|
|
|
|
wake_up(&tctx->wait);
|
|
|
|
put_task_struct_many(task, nr);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* must be called shortly after putting a request */
|
|
|
|
static inline void io_put_task(struct task_struct *task, int nr)
|
|
|
|
{
|
|
|
|
if (likely(task == current))
|
|
|
|
task->io_uring->cached_refs += nr;
|
|
|
|
else
|
|
|
|
__io_put_task(task, nr);
|
2021-08-09 15:04:13 +03:00
|
|
|
}
|
|
|
|
|
2021-08-27 13:55:01 +03:00
|
|
|
static void io_task_refs_refill(struct io_uring_task *tctx)
|
|
|
|
{
|
|
|
|
unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;
|
|
|
|
|
|
|
|
percpu_counter_add(&tctx->inflight, refill);
|
|
|
|
refcount_add(refill, ¤t->usage);
|
|
|
|
tctx->cached_refs += refill;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void io_get_task_refs(int nr)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
|
|
|
|
tctx->cached_refs -= nr;
|
|
|
|
if (unlikely(tctx->cached_refs < 0))
|
|
|
|
io_task_refs_refill(tctx);
|
|
|
|
}
|
|
|
|
|
2022-01-09 03:53:22 +03:00
|
|
|
static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
|
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = task->io_uring;
|
|
|
|
unsigned int refs = tctx->cached_refs;
|
|
|
|
|
|
|
|
if (refs) {
|
|
|
|
tctx->cached_refs = 0;
|
|
|
|
percpu_counter_sub(&tctx->inflight, refs);
|
|
|
|
put_task_struct_many(task, refs);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-25 16:32:17 +03:00
|
|
|
static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
|
2022-04-26 21:21:30 +03:00
|
|
|
s32 res, u32 cflags, u64 extra1,
|
|
|
|
u64 extra2)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2021-04-13 04:58:44 +03:00
|
|
|
struct io_overflow_cqe *ocqe;
|
2022-04-26 21:21:30 +03:00
|
|
|
size_t ocq_size = sizeof(struct io_overflow_cqe);
|
|
|
|
bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2022-04-26 21:21:30 +03:00
|
|
|
if (is_cqe32)
|
|
|
|
ocq_size += sizeof(struct io_uring_cqe);
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2022-04-26 21:21:30 +03:00
|
|
|
ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
|
2022-04-21 12:13:41 +03:00
|
|
|
trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
|
2021-04-13 04:58:44 +03:00
|
|
|
if (!ocqe) {
|
|
|
|
/*
|
|
|
|
* If we're in ring overflow flush mode, or in task cancel mode,
|
|
|
|
* or cannot allocate an overflow entry, then we need to drop it
|
|
|
|
* on the floor.
|
|
|
|
*/
|
2021-05-17 00:58:10 +03:00
|
|
|
io_account_cq_overflow(ctx);
|
2022-04-21 12:13:44 +03:00
|
|
|
set_bit(IO_CHECK_CQ_DROPPED_BIT, &ctx->check_cq);
|
2021-04-13 04:58:44 +03:00
|
|
|
return false;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
2021-04-13 04:58:44 +03:00
|
|
|
if (list_empty(&ctx->cq_overflow_list)) {
|
2022-04-21 12:13:43 +03:00
|
|
|
set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
|
2022-04-26 04:49:00 +03:00
|
|
|
atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
|
2021-08-08 03:13:42 +03:00
|
|
|
|
2021-04-13 04:58:44 +03:00
|
|
|
}
|
2021-04-25 16:32:17 +03:00
|
|
|
ocqe->cqe.user_data = user_data;
|
2021-04-13 04:58:44 +03:00
|
|
|
ocqe->cqe.res = res;
|
|
|
|
ocqe->cqe.flags = cflags;
|
2022-04-26 21:21:30 +03:00
|
|
|
if (is_cqe32) {
|
|
|
|
ocqe->cqe.big_cqe[0] = extra1;
|
|
|
|
ocqe->cqe.big_cqe[1] = extra2;
|
|
|
|
}
|
2021-04-13 04:58:44 +03:00
|
|
|
list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
|
|
|
|
return true;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
}
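
/*
 * Copy the completion stashed in req->cqe into the CQ ring. The CQE32
 * and normal layouts are handled separately: the big_cqe[] words are
 * only written when the ring was created with IORING_SETUP_CQE32. If no
 * CQ ring entry is available, the completion is handed to
 * io_cqring_event_overflow() above.
 */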
static inline bool __io_fill_cqe_req(struct io_ring_ctx *ctx,
				     struct io_kiocb *req)
{
	struct io_uring_cqe *cqe;

	if (!(ctx->flags & IORING_SETUP_CQE32)) {
		trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
					req->cqe.res, req->cqe.flags, 0, 0);

		/*
		 * If we can't get a cq entry, userspace overflowed the
		 * submission (by quite a lot). Increment the overflow count in
		 * the ring.
		 */
		cqe = io_get_cqe(ctx);
		if (likely(cqe)) {
			memcpy(cqe, &req->cqe, sizeof(*cqe));
			return true;
		}

		return io_cqring_event_overflow(ctx, req->cqe.user_data,
						req->cqe.res, req->cqe.flags,
						0, 0);
	} else {
		u64 extra1 = 0, extra2 = 0;

		if (req->flags & REQ_F_CQE32_INIT) {
			extra1 = req->extra1;
			extra2 = req->extra2;
		}

		trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
					req->cqe.res, req->cqe.flags, extra1, extra2);

		/*
		 * If we can't get a cq entry, userspace overflowed the
		 * submission (by quite a lot). Increment the overflow count in
		 * the ring.
		 */
		cqe = io_get_cqe(ctx);
		if (likely(cqe)) {
			memcpy(cqe, &req->cqe, sizeof(struct io_uring_cqe));
			WRITE_ONCE(cqe->big_cqe[0], extra1);
			WRITE_ONCE(cqe->big_cqe[1], extra2);
			return true;
		}

		return io_cqring_event_overflow(ctx, req->cqe.user_data,
						req->cqe.res, req->cqe.flags,
						extra1, extra2);
	}
}
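
/*
 * Post an auxiliary CQE that isn't backed by a request: the caller
 * passes user_data/res/cflags directly. ->cq_extra counts these so that
 * drain/defer sequence accounting stays consistent.
 */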
bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
		     u32 cflags)
{
	struct io_uring_cqe *cqe;

	ctx->cq_extra++;
	trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);

	/*
	 * If we can't get a cq entry, userspace overflowed the
	 * submission (by quite a lot). Increment the overflow count in
	 * the ring.
	 */
	cqe = io_get_cqe(ctx);
	if (likely(cqe)) {
		WRITE_ONCE(cqe->user_data, user_data);
		WRITE_ONCE(cqe->res, res);
		WRITE_ONCE(cqe->flags, cflags);

		if (ctx->flags & IORING_SETUP_CQE32) {
			WRITE_ONCE(cqe->big_cqe[0], 0);
			WRITE_ONCE(cqe->big_cqe[1], 0);
		}
		return true;
	}
	return io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
}
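
/*
 * Drop the completion reference on @req. If that was the last reference,
 * disarm and queue any linked requests, release held resources and park
 * the request on ctx->locked_free_list for reuse.
 */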
static void __io_req_complete_put(struct io_kiocb *req)
{
	/*
	 * If we're the last reference to this request, add to our locked
	 * free_list cache.
	 */
	if (req_ref_put_and_test(req)) {
		struct io_ring_ctx *ctx = req->ctx;

		if (req->flags & IO_REQ_LINK_FLAGS) {
			if (req->flags & IO_DISARM_MASK)
				io_disarm_next(req);
			if (req->link) {
				io_req_task_queue(req->link);
				req->link = NULL;
			}
		}
		io_req_put_rsrc(req);
		/*
		 * Selected buffer deallocation in io_clean_op() assumes that
		 * we don't hold ->completion_lock. Clean them here to avoid
		 * deadlocks.
		 */
		io_put_kbuf_comp(req);
		io_dismantle_req(req);
		io_put_task(req->task, 1);
		wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
		ctx->locked_free_nr++;
	}
}
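
/*
 * __io_req_complete_post() fills the CQE for @req (unless REQ_F_CQE_SKIP
 * is set) and drops the completion reference. io_req_complete_post()
 * below wraps it with ->completion_lock, commits the CQ ring and wakes
 * any waiters via io_cqring_ev_posted().
 */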
void __io_req_complete_post(struct io_kiocb *req)
{
	if (!(req->flags & REQ_F_CQE_SKIP))
		__io_fill_cqe_req(req->ctx, req);
	__io_req_complete_put(req);
}

void io_req_complete_post(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	spin_lock(&ctx->completion_lock);
	__io_req_complete_post(req);
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);
}
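
/*
 * Generic completion helper: if the caller allows deferred completion
 * (IO_URING_F_COMPLETE_DEFER), only mark the request as completed inline
 * so the submission path can flush it in a batch; otherwise post the
 * completion immediately.
 */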
inline void __io_req_complete(struct io_kiocb *req, unsigned issue_flags)
{
	if (issue_flags & IO_URING_F_COMPLETE_DEFER)
		req->flags |= REQ_F_COMPLETE_INLINE;
	else
		io_req_complete_post(req);
}
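
/*
 * Fail @req: mark it failed and complete it with @res plus whatever
 * buffer-accounting cflags io_put_kbuf() returns while releasing any
 * selected buffer.
 */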
void io_req_complete_failed(struct io_kiocb *req, s32 res)
{
	req_set_fail(req);
	io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
	io_req_complete_post(req);
}

/*
 * Don't initialise the fields below on every allocation, but do that in
 * advance and keep them valid across allocations.
 */
static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
	req->ctx = ctx;
	req->link = NULL;
	req->async_data = NULL;
	/* not necessary, but safer to zero */
	req->cqe.res = 0;
}

static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
					struct io_submit_state *state)
{
	spin_lock(&ctx->completion_lock);
	wq_list_splice(&ctx->locked_free_list, &state->free_list);
	ctx->locked_free_nr = 0;
	spin_unlock(&ctx->completion_lock);
}

static inline bool io_req_cache_empty(struct io_ring_ctx *ctx)
{
	return !ctx->submit_state.free_list.next;
}

/*
 * A request might get retired back into the request caches even before opcode
 * handlers and io_issue_sqe() are done with it, e.g. inline completion path.
 * Because of that, io_alloc_req() should be called only under ->uring_lock
 * and with extra caution to not get a request that is still worked on.
 */
static __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
	__must_hold(&ctx->uring_lock)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
	void *reqs[IO_REQ_ALLOC_BATCH];
	int ret, i;

	/*
	 * If we have more than a batch's worth of requests in our IRQ side
	 * locked cache, grab the lock and move them over to our submission
	 * side cache.
	 */
	if (data_race(ctx->locked_free_nr) > IO_COMPL_BATCH) {
		io_flush_cached_locked_reqs(ctx, &ctx->submit_state);
		if (!io_req_cache_empty(ctx))
			return true;
	}

	ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);

	/*
	 * Bulk alloc is all-or-nothing. If we fail to get a batch,
	 * retry single alloc to be on the safe side.
	 */
	if (unlikely(ret <= 0)) {
		reqs[0] = kmem_cache_alloc(req_cachep, gfp);
		if (!reqs[0])
			return false;
		ret = 1;
	}

	percpu_ref_get_many(&ctx->refs, ret);
	for (i = 0; i < ret; i++) {
		struct io_kiocb *req = reqs[i];

		io_preinit_req(req, ctx);
		io_req_add_to_cache(req, ctx);
	}
	return true;
}

static inline bool io_alloc_req_refill(struct io_ring_ctx *ctx)
{
	if (unlikely(io_req_cache_empty(ctx)))
		return __io_alloc_req_refill(ctx);
	return true;
}
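
/*
 * Pop a request from the submission-side cache. The cache must be
 * non-empty; callers ensure that via io_alloc_req_refill() above.
 */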
static inline struct io_kiocb *io_alloc_req(struct io_ring_ctx *ctx)
{
	struct io_wq_work_node *node;

	node = wq_stack_extract(&ctx->submit_state.free_list);
	return container_of(node, struct io_kiocb, comp_list);
}
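
/*
 * Release per-request state that isn't needed once the request is done:
 * opcode-specific cleanup for requests with "dirty" flags and the file
 * reference for non-fixed files.
 */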
static inline void io_dismantle_req(struct io_kiocb *req)
{
	unsigned int flags = req->flags;

	if (unlikely(flags & IO_REQ_CLEAN_FLAGS))
		io_clean_op(req);
	if (!(flags & REQ_F_FIXED_FILE))
		io_put_file(req->file);
}
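
/*
 * Slow-path free: drop the request's resources and task reference, then
 * return it to the ctx locked free list under ->completion_lock.
 */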
__cold void io_free_req(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	io_req_put_rsrc(req);
	io_dismantle_req(req);
	io_put_task(req->task, 1);

	spin_lock(&ctx->completion_lock);
	wq_list_add_head(&req->comp_list, &ctx->locked_free_list);
	ctx->locked_free_nr++;
	spin_unlock(&ctx->completion_lock);
}

static void __io_req_find_next_prep(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	bool posted;

	spin_lock(&ctx->completion_lock);
	posted = io_disarm_next(req);
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	if (posted)
		io_cqring_ev_posted(ctx);
}

static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
{
	struct io_kiocb *nxt;

	/*
	 * If LINK is set, we have dependent requests in this chain. If we
	 * didn't fail this request, queue the first one up, moving any other
	 * dependencies to the next request. In case of failure, fail the rest
	 * of the chain.
	 */
	if (unlikely(req->flags & IO_DISARM_MASK))
		__io_req_find_next_prep(req);
	nxt = req->link;
	req->link = NULL;
	return nxt;
}
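
/*
 * Done running task_work against @ctx: clear IORING_SQ_TASKRUN (when the
 * ring exposes the flag), flush batched completions if we still hold
 * ->uring_lock, and drop the ctx reference taken while processing.
 */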
static void ctx_flush_and_put(struct io_ring_ctx *ctx, bool *locked)
{
	if (!ctx)
		return;
	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
		atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
	if (*locked) {
		io_submit_flush_completions(ctx);
		mutex_unlock(&ctx->uring_lock);
		*locked = false;
	}
	percpu_ref_put(&ctx->refs);
}

static inline void ctx_commit_and_unlock(struct io_ring_ctx *ctx)
{
	io_commit_cqring(ctx);
	spin_unlock(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);
}
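
/*
 * Run a batch of task_work items from the priority list. Completions are
 * batched per ctx: ->uring_lock is taken opportunistically, and when it
 * is contended the request is completed directly under ->completion_lock.
 */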
static void handle_prev_tw_list(struct io_wq_work_node *node,
				struct io_ring_ctx **ctx, bool *uring_locked)
{
	if (*ctx && !*uring_locked)
		spin_lock(&(*ctx)->completion_lock);

	do {
		struct io_wq_work_node *next = node->next;
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    io_task_work.node);

		prefetch(container_of(next, struct io_kiocb, io_task_work.node));

		if (req->ctx != *ctx) {
			if (unlikely(!*uring_locked && *ctx))
				ctx_commit_and_unlock(*ctx);

			ctx_flush_and_put(*ctx, uring_locked);
			*ctx = req->ctx;
			/* if not contended, grab and improve batching */
			*uring_locked = mutex_trylock(&(*ctx)->uring_lock);
			percpu_ref_get(&(*ctx)->refs);
			if (unlikely(!*uring_locked))
				spin_lock(&(*ctx)->completion_lock);
		}
		if (likely(*uring_locked)) {
			req->io_task_work.func(req, uring_locked);
		} else {
			req->cqe.flags = io_put_kbuf_comp(req);
			__io_req_complete_post(req);
		}
		node = next;
	} while (node);

	if (unlikely(!*uring_locked))
		ctx_commit_and_unlock(*ctx);
}

static void handle_tw_list(struct io_wq_work_node *node,
			   struct io_ring_ctx **ctx, bool *locked)
{
	do {
		struct io_wq_work_node *next = node->next;
		struct io_kiocb *req = container_of(node, struct io_kiocb,
						    io_task_work.node);

		prefetch(container_of(next, struct io_kiocb, io_task_work.node));

		if (req->ctx != *ctx) {
			ctx_flush_and_put(*ctx, locked);
			*ctx = req->ctx;
			/* if not contended, grab and improve batching */
			*locked = mutex_trylock(&(*ctx)->uring_lock);
			percpu_ref_get(&(*ctx)->refs);
		}
		req->io_task_work.func(req, locked);
		node = next;
	} while (node);
}
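
/*
 * task_work callback for an io_uring task: repeatedly splice off the
 * priority and normal task_work lists and run them until both are empty,
 * then drop the ctx reference and, if the task is going idle, its cached
 * tctx references.
 */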
void tctx_task_work(struct callback_head *cb)
{
	bool uring_locked = false;
	struct io_ring_ctx *ctx = NULL;
	struct io_uring_task *tctx = container_of(cb, struct io_uring_task,
						  task_work);

	while (1) {
		struct io_wq_work_node *node1, *node2;

		spin_lock_irq(&tctx->task_lock);
		node1 = tctx->prio_task_list.first;
		node2 = tctx->task_list.first;
		INIT_WQ_LIST(&tctx->task_list);
		INIT_WQ_LIST(&tctx->prio_task_list);
		if (!node2 && !node1)
			tctx->task_running = false;
		spin_unlock_irq(&tctx->task_lock);
		if (!node2 && !node1)
			break;

		if (node1)
			handle_prev_tw_list(node1, &ctx, &uring_locked);
		if (node2)
			handle_tw_list(node2, &ctx, &uring_locked);
		cond_resched();

		if (data_race(!tctx->task_list.first) &&
		    data_race(!tctx->prio_task_list.first) && uring_locked)
			io_submit_flush_completions(ctx);
	}

	ctx_flush_and_put(ctx, &uring_locked);

	/* relaxed read is enough as only the task itself sets ->in_idle */
	if (unlikely(atomic_read(&tctx->in_idle)))
		io_uring_drop_tctx_refs(current);
}
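
/*
 * Queue @req on the given tctx task_work list and, if a run isn't already
 * pending, schedule one with task_work_add(). If that fails (e.g. the
 * task is exiting), move every queued request to its ctx fallback list
 * and kick the delayed fallback work instead.
 */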
static void __io_req_task_work_add(struct io_kiocb *req,
				   struct io_uring_task *tctx,
				   struct io_wq_work_list *list)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_wq_work_node *node;
	unsigned long flags;
	bool running;

	spin_lock_irqsave(&tctx->task_lock, flags);
	wq_list_add_tail(&req->io_task_work.node, list);
	running = tctx->task_running;
	if (!running)
		tctx->task_running = true;
	spin_unlock_irqrestore(&tctx->task_lock, flags);

	/* task_work already pending, we're done */
	if (running)
		return;

	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);

	if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
		return;

	spin_lock_irqsave(&tctx->task_lock, flags);
	tctx->task_running = false;
	node = wq_list_merge(&tctx->prio_task_list, &tctx->task_list);
	spin_unlock_irqrestore(&tctx->task_lock, flags);

	while (node) {
		req = container_of(node, struct io_kiocb, io_task_work.node);
		node = node->next;
		if (llist_add(&req->io_task_work.fallback_node,
			      &req->ctx->fallback_llist))
			schedule_delayed_work(&req->ctx->fallback_work, 1);
	}
}

void io_req_task_work_add(struct io_kiocb *req)
{
	struct io_uring_task *tctx = req->task->io_uring;

	__io_req_task_work_add(req, tctx, &tctx->task_list);
}

static void io_req_task_prio_work_add(struct io_kiocb *req)
{
	struct io_uring_task *tctx = req->task->io_uring;

	if (req->ctx->flags & IORING_SETUP_SQPOLL)
		__io_req_task_work_add(req, tctx, &tctx->prio_task_list);
	else
		__io_req_task_work_add(req, tctx, &tctx->task_list);
}
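
/*
 * Minimal completion-from-task-work helpers: io_req_tw_post() posts the
 * stored completion, io_req_tw_post_queue() records the result and
 * schedules it to run via task_work.
 */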
static void io_req_tw_post(struct io_kiocb *req, bool *locked)
{
	io_req_complete_post(req);
}

void io_req_tw_post_queue(struct io_kiocb *req, s32 res, u32 cflags)
{
	io_req_set_res(req, res, cflags);
	req->io_task_work.func = io_req_tw_post;
	io_req_task_work_add(req);
}
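
/*
 * task_work handler used for cancellation: complete @req as failed with
 * the result previously stashed in req->cqe.res.
 */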
|
|
|
|
|
2021-08-18 14:42:46 +03:00
|
|
|
static void io_req_task_cancel(struct io_kiocb *req, bool *locked)
|
2020-06-26 00:39:59 +03:00
|
|
|
{
|
2021-08-25 22:51:39 +03:00
|
|
|
/* not needed for normal modes, but SQPOLL depends on it */
|
2022-04-16 00:08:22 +03:00
|
|
|
io_tw_lock(req->ctx, locked);
|
2022-04-12 17:09:43 +03:00
|
|
|
io_req_complete_failed(req, req->cqe.res);
|
2020-06-26 00:39:59 +03:00
|
|
|
}
|
|
|
|
|
2022-05-26 05:31:09 +03:00
|
|
|
void io_req_task_submit(struct io_kiocb *req, bool *locked)
|
2020-06-26 00:39:59 +03:00
|
|
|
{
|
2022-04-16 00:08:22 +03:00
|
|
|
io_tw_lock(req->ctx, locked);
|
2021-08-19 18:41:42 +03:00
|
|
|
/* req->task == current here, checking PF_EXITING is safe */
|
2021-08-09 15:04:19 +03:00
|
|
|
if (likely(!(req->task->flags & PF_EXITING)))
|
2022-04-16 00:08:26 +03:00
|
|
|
io_queue_sqe(req);
|
2021-01-04 23:36:35 +03:00
|
|
|
else
|
2021-03-19 20:22:40 +03:00
|
|
|
io_req_complete_failed(req, -EFAULT);
|
2020-06-26 00:39:59 +03:00
|
|
|
}
|
|
|
|
|
2022-05-25 17:57:27 +03:00
|
|
|
void io_req_task_queue_fail(struct io_kiocb *req, int ret)
|
2020-06-26 00:39:59 +03:00
|
|
|
{
|
2022-05-25 00:21:00 +03:00
|
|
|
io_req_set_res(req, ret, 0);
|
2021-06-30 23:54:04 +03:00
|
|
|
req->io_task_work.func = io_req_task_cancel;
|
2022-05-21 18:17:05 +03:00
|
|
|
io_req_task_work_add(req);
|
2020-06-26 00:39:59 +03:00
|
|
|
}
|
|
|
|
|
2021-03-01 01:35:10 +03:00
|
|
|
static void io_req_task_queue(struct io_kiocb *req)
|
2021-02-19 01:32:52 +03:00
|
|
|
{
|
2021-06-30 23:54:04 +03:00
|
|
|
req->io_task_work.func = io_req_task_submit;
|
2022-05-21 18:17:05 +03:00
|
|
|
io_req_task_work_add(req);
|
2021-02-19 01:32:52 +03:00
|
|
|
}
|
|
|
|
|
2021-07-27 19:25:55 +03:00
|
|
|
static void io_req_task_queue_reissue(struct io_kiocb *req)
|
|
|
|
{
|
2022-04-16 00:08:27 +03:00
|
|
|
req->io_task_work.func = io_queue_iowq;
|
2022-05-21 18:17:05 +03:00
|
|
|
io_req_task_work_add(req);
|
2021-07-27 19:25:55 +03:00
|
|
|
}
|
|
|
|
|
2022-05-25 17:57:27 +03:00
|
|
|
void io_queue_next(struct io_kiocb *req)
|
2019-11-09 06:00:08 +03:00
|
|
|
{
|
2020-06-29 13:13:00 +03:00
|
|
|
struct io_kiocb *nxt = io_req_find_next(req);
|
2019-11-21 23:21:01 +03:00
|
|
|
|
|
|
|
if (nxt)
|
2020-06-27 14:04:55 +03:00
|
|
|
io_req_task_queue(nxt);
|
2019-11-09 06:00:08 +03:00
|
|
|
}
|
|
|
|
|
2021-09-24 23:59:50 +03:00
|
|
|
static void io_free_batch_list(struct io_ring_ctx *ctx,
|
2021-09-24 23:59:54 +03:00
|
|
|
struct io_wq_work_node *node)
|
2021-09-24 23:59:50 +03:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2020-07-18 11:32:52 +03:00
|
|
|
{
|
2021-09-24 23:59:53 +03:00
|
|
|
struct task_struct *task = NULL;
|
2021-10-04 22:02:53 +03:00
|
|
|
int task_refs = 0;
|
2020-07-18 11:32:52 +03:00
|
|
|
|
2021-09-24 23:59:50 +03:00
|
|
|
do {
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
comp_list);
|
2020-06-28 12:52:33 +03:00
|
|
|
|
2022-03-22 01:02:22 +03:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
|
|
|
|
if (req->flags & REQ_F_REFCOUNT) {
|
|
|
|
node = req->comp_list.next;
|
|
|
|
if (!req_ref_put_and_test(req))
|
|
|
|
continue;
|
|
|
|
}
|
2022-03-22 01:02:23 +03:00
|
|
|
if ((req->flags & REQ_F_POLLED) && req->apoll) {
|
|
|
|
struct async_poll *apoll = req->apoll;
|
|
|
|
|
|
|
|
if (apoll->double_poll)
|
|
|
|
kfree(apoll->double_poll);
|
|
|
|
list_add(&apoll->poll.wait.entry,
|
|
|
|
&ctx->apoll_cache);
|
|
|
|
req->flags &= ~REQ_F_POLLED;
|
|
|
|
}
|
2022-04-16 00:08:29 +03:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
2022-03-22 01:02:24 +03:00
|
|
|
io_queue_next(req);
|
2022-03-22 01:02:22 +03:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
|
|
|
|
io_clean_op(req);
|
2021-10-04 22:02:55 +03:00
|
|
|
}
|
2022-03-22 01:02:22 +03:00
|
|
|
if (!(req->flags & REQ_F_FIXED_FILE))
|
|
|
|
io_put_file(req->file);
|
2020-06-28 12:52:33 +03:00
|
|
|
|
2021-10-10 01:14:41 +03:00
|
|
|
io_req_put_rsrc_locked(req, ctx);
|
2020-07-18 11:32:52 +03:00
|
|
|
|
2021-09-24 23:59:53 +03:00
|
|
|
if (req->task != task) {
|
|
|
|
if (task)
|
|
|
|
io_put_task(task, task_refs);
|
|
|
|
task = req->task;
|
|
|
|
task_refs = 0;
|
|
|
|
}
|
|
|
|
task_refs++;
|
2021-10-04 22:02:55 +03:00
|
|
|
node = req->comp_list.next;
|
2022-04-12 17:09:48 +03:00
|
|
|
io_req_add_to_cache(req, ctx);
|
2021-09-24 23:59:50 +03:00
|
|
|
} while (node);
|
2021-09-24 23:59:53 +03:00
|
|
|
|
|
|
|
if (task)
|
|
|
|
io_put_task(task, task_refs);
|
2020-03-03 21:33:13 +03:00
|
|
|
}
|
|
|
|
|
2021-09-08 18:40:52 +03:00
|
|
|
static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
|
2021-08-12 21:48:34 +03:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-02-10 03:03:14 +03:00
|
|
|
{
|
2021-09-24 23:59:44 +03:00
|
|
|
struct io_wq_work_node *node, *prev;
|
2021-08-09 22:18:11 +03:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2021-02-10 03:03:14 +03:00
|
|
|
|
2021-11-10 18:49:33 +03:00
|
|
|
if (state->flush_cqes) {
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
wq_list_for_each(node, prev, &state->compl_reqs) {
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
2021-09-24 23:59:44 +03:00
|
|
|
comp_list);
|
2021-06-26 23:40:48 +03:00
|
|
|
|
2022-06-15 13:23:03 +03:00
|
|
|
if (!(req->flags & REQ_F_CQE_SKIP))
|
|
|
|
__io_fill_cqe_req(ctx, req);
|
2021-11-10 18:49:33 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
io_commit_cqring(ctx);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
io_cqring_ev_posted(ctx);
|
|
|
|
state->flush_cqes = false;
|
2021-02-10 03:03:14 +03:00
|
|
|
}
|
2021-06-26 23:40:48 +03:00
|
|
|
|
2021-09-24 23:59:54 +03:00
|
|
|
io_free_batch_list(ctx, state->compl_reqs.first);
|
2021-09-24 23:59:44 +03:00
|
|
|
INIT_WQ_LIST(&state->compl_reqs);
|
2020-03-03 21:33:13 +03:00
|
|
|
}
|
|
|
|
|
2019-09-28 20:36:45 +03:00
|
|
|
/*
|
|
|
|
* Drop reference to request, return next in chain (if there is one) if this
|
|
|
|
* was the last reference to this request.
|
|
|
|
*/
|
2021-03-19 20:22:37 +03:00
|
|
|
static inline struct io_kiocb *io_put_req_find_next(struct io_kiocb *req)
|
2019-03-12 19:16:44 +03:00
|
|
|
{
|
2020-06-29 13:13:00 +03:00
|
|
|
struct io_kiocb *nxt = NULL;
|
|
|
|
|
2021-02-24 23:28:27 +03:00
|
|
|
if (req_ref_put_and_test(req)) {
|
2022-04-16 00:08:29 +03:00
|
|
|
if (unlikely(req->flags & IO_REQ_LINK_FLAGS))
|
2022-03-22 01:02:21 +03:00
|
|
|
nxt = io_req_find_next(req);
|
2022-04-16 00:08:24 +03:00
|
|
|
io_free_req(req);
|
2020-02-25 23:25:41 +03:00
|
|
|
}
|
2020-06-29 13:13:00 +03:00
|
|
|
return nxt;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2021-01-04 23:36:36 +03:00
|
|
|
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
|
2019-08-20 20:03:11 +03:00
|
|
|
{
|
|
|
|
/* See comment at the top of this file */
|
|
|
|
smp_rmb();
|
2020-12-17 03:24:37 +03:00
|
|
|
return __io_cqring_events(ctx);
|
2019-08-20 20:03:11 +03:00
|
|
|
}
|
|
|
|
|
2022-05-25 18:13:39 +03:00
|
|
|
int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
|
2019-01-09 18:59:42 +03:00
|
|
|
{
|
2021-09-24 23:59:49 +03:00
|
|
|
struct io_wq_work_node *pos, *start, *prev;
|
2021-10-12 14:12:20 +03:00
|
|
|
unsigned int poll_flags = BLK_POLL_NOSLEEP;
|
2021-10-12 18:28:46 +03:00
|
|
|
DEFINE_IO_COMP_BATCH(iob);
|
2021-09-24 23:59:43 +03:00
|
|
|
int nr_events = 0;
|
2019-01-09 18:59:42 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Only spin for completions if we don't have multiple devices hanging
|
2021-09-24 23:59:42 +03:00
|
|
|
* off our complete list.
|
2019-01-09 18:59:42 +03:00
|
|
|
*/
|
2021-09-24 23:59:42 +03:00
|
|
|
if (ctx->poll_multi_queue || force_nonspin)
|
2021-10-12 14:12:19 +03:00
|
|
|
poll_flags |= BLK_POLL_ONESHOT;
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2021-09-24 23:59:49 +03:00
|
|
|
wq_list_for_each(pos, start, &ctx->iopoll_list) {
|
|
|
|
struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2021-08-09 15:04:09 +03:00
|
|
|
int ret;
|
2019-01-09 18:59:42 +03:00
|
|
|
|
|
|
|
/*
|
2020-04-03 23:51:33 +03:00
|
|
|
* Move completed and retryable entries to our local lists.
|
|
|
|
* If we find a request that requires polling, break out
|
|
|
|
* and complete those lists first, if we have entries there.
|
2019-01-09 18:59:42 +03:00
|
|
|
*/
|
2021-09-24 23:59:48 +03:00
|
|
|
if (READ_ONCE(req->iopoll_completed))
|
2019-01-09 18:59:42 +03:00
|
|
|
break;
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = rw->kiocb.ki_filp->f_op->iopoll(&rw->kiocb, &iob, poll_flags);
|
2021-08-09 15:04:09 +03:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
|
|
|
else if (ret)
|
2021-10-12 14:12:19 +03:00
|
|
|
poll_flags |= BLK_POLL_ONESHOT;
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2020-07-06 17:59:29 +03:00
|
|
|
/* iopoll may have completed current req */
|
2021-10-12 18:28:46 +03:00
|
|
|
if (!rq_list_empty(iob.req_list) ||
|
|
|
|
READ_ONCE(req->iopoll_completed))
|
2021-09-24 23:59:48 +03:00
|
|
|
break;
|
2019-01-09 18:59:42 +03:00
|
|
|
}
|
|
|
|
|
2021-10-12 18:28:46 +03:00
|
|
|
if (!rq_list_empty(iob.req_list))
|
|
|
|
iob.complete(&iob);
|
2021-09-24 23:59:49 +03:00
|
|
|
else if (!pos)
|
|
|
|
return 0;
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2021-09-24 23:59:49 +03:00
|
|
|
prev = start;
|
|
|
|
wq_list_for_each_resume(pos, prev) {
|
|
|
|
struct io_kiocb *req = container_of(pos, struct io_kiocb, comp_list);
|
|
|
|
|
2021-09-24 23:59:51 +03:00
|
|
|
/* order with io_complete_rw_iopoll(), e.g. ->result updates */
|
|
|
|
if (!smp_load_acquire(&req->iopoll_completed))
|
2021-09-24 23:59:48 +03:00
|
|
|
break;
|
2022-04-17 12:10:34 +03:00
|
|
|
nr_events++;
|
2021-12-05 17:37:59 +03:00
|
|
|
if (unlikely(req->flags & REQ_F_CQE_SKIP))
|
|
|
|
continue;
|
2022-06-15 13:23:02 +03:00
|
|
|
|
|
|
|
req->cqe.flags = io_put_kbuf(req, 0);
|
|
|
|
__io_fill_cqe_req(req->ctx, req);
|
2021-09-24 23:59:48 +03:00
|
|
|
}
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2021-09-24 23:59:52 +03:00
|
|
|
if (unlikely(!nr_events))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
io_commit_cqring(ctx);
|
|
|
|
io_cqring_ev_posted_iopoll(ctx);
|
2021-09-24 23:59:54 +03:00
|
|
|
pos = start ? start->next : ctx->iopoll_list.first;
|
2021-09-24 23:59:49 +03:00
|
|
|
wq_list_cut(&ctx->iopoll_list, prev, start);
|
2021-09-24 23:59:54 +03:00
|
|
|
io_free_batch_list(ctx, pos);
|
2021-09-24 23:59:43 +03:00
|
|
|
return nr_events;
|
2019-01-09 18:59:42 +03:00
|
|
|
}
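io_do_iopoll() only runs when someone asks for completions on an
IORING_SETUP_IOPOLL ring, typically the application via io_uring_enter() with
IORING_ENTER_GETEVENTS (or the SQPOLL thread on its behalf). A rough liburing
sketch of that userspace side follows; the function name and sizes are
illustrative, it assumes an O_DIRECT-capable file whose driver implements
->iopoll(), and the caller must pass a suitably aligned buffer.

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <liburing.h>

/* Issue one polled O_DIRECT read. io_uring_submit_and_wait() ends up
 * running io_do_iopoll() repeatedly until ->iopoll_completed is set. */
static int polled_read(const char *path, void *buf, unsigned len)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        int fd, ret;

        fd = open(path, O_RDONLY | O_DIRECT);
        if (fd < 0)
                return -1;

        /* IOPOLL ring: completions are reaped by polling, not by IRQs */
        ret = io_uring_queue_init(8, &ring, IORING_SETUP_IOPOLL);
        if (ret < 0) {
                close(fd);
                return ret;
        }

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, len, 0);   /* buf must be block aligned */

        /* submit and spin in the kernel until the completion is found */
        ret = io_uring_submit_and_wait(&ring, 1);
        if (ret >= 0 && io_uring_wait_cqe(&ring, &cqe) == 0) {
                ret = cqe->res;
                io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        close(fd);
        return ret;
}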
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We can't just wait for polled events to come to us, we have to actively
|
|
|
|
* find and complete them.
|
|
|
|
*/
|
2021-10-04 22:02:54 +03:00
|
|
|
static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
|
2019-01-09 18:59:42 +03:00
|
|
|
{
|
|
|
|
if (!(ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-09-24 23:59:49 +03:00
|
|
|
while (!wq_list_empty(&ctx->iopoll_list)) {
|
2020-07-07 16:36:22 +03:00
|
|
|
/* let it sleep and repeat later if can't complete a request */
|
2021-09-24 23:59:43 +03:00
|
|
|
if (io_do_iopoll(ctx, true) == 0)
|
2020-07-07 16:36:22 +03:00
|
|
|
break;
|
2019-08-22 07:19:11 +03:00
|
|
|
/*
|
|
|
|
* Ensure we allow local-to-the-cpu processing to take place,
|
|
|
|
* in this case we need to ensure that we reap all events.
|
2020-07-06 17:59:31 +03:00
|
|
|
* Also let task_work, etc., make progress by releasing the mutex
|
2019-08-22 07:19:11 +03:00
|
|
|
*/
|
2020-07-06 17:59:31 +03:00
|
|
|
if (need_resched()) {
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
cond_resched();
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 18:59:42 +03:00
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2020-07-07 16:36:21 +03:00
|
|
|
static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
|
2019-01-09 18:59:42 +03:00
|
|
|
{
|
2020-07-07 16:36:21 +03:00
|
|
|
unsigned int nr_events = 0;
|
2021-04-13 04:58:45 +03:00
|
|
|
int ret = 0;
|
2022-04-21 12:13:44 +03:00
|
|
|
unsigned long check_cq;
|
2019-08-19 21:15:59 +03:00
|
|
|
|
2021-04-13 04:58:46 +03:00
|
|
|
/*
|
|
|
|
* Don't enter poll loop if we already have events pending.
|
|
|
|
* If we do, we can potentially be spinning for commands that
|
|
|
|
* already triggered a CQE (eg in error).
|
|
|
|
*/
|
2022-04-21 12:13:44 +03:00
|
|
|
check_cq = READ_ONCE(ctx->check_cq);
|
|
|
|
if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
|
2021-04-13 04:58:46 +03:00
|
|
|
__io_cqring_overflow_flush(ctx, false);
|
|
|
|
if (io_cqring_events(ctx))
|
2022-03-22 17:07:58 +03:00
|
|
|
return 0;
|
2022-04-21 12:13:44 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Similarly do not spin if we have not informed the user of any
|
|
|
|
* dropped CQE.
|
|
|
|
*/
|
|
|
|
if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
|
|
|
|
return -EBADR;
|
|
|
|
|
2019-01-09 18:59:42 +03:00
|
|
|
do {
|
2019-08-19 21:15:59 +03:00
|
|
|
/*
|
|
|
|
* If a submit got punted to a workqueue, we can have the
|
|
|
|
* application entering polling for a command before it gets
|
|
|
|
* issued. That app will hold the uring_lock for the duration
|
|
|
|
* of the poll right here, so we need to take a breather every
|
|
|
|
* now and then to ensure that the issue has a chance to add
|
|
|
|
* the poll to the issued list. Otherwise we can spin here
|
|
|
|
* forever, while the workqueue is stuck trying to acquire the
|
|
|
|
* very same mutex.
|
|
|
|
*/
|
2021-09-24 23:59:49 +03:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list)) {
|
2021-07-08 15:37:06 +03:00
|
|
|
u32 tail = ctx->cached_cq_tail;
|
|
|
|
|
2019-08-19 21:15:59 +03:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2020-07-01 20:29:10 +03:00
|
|
|
io_run_task_work();
|
2019-08-19 21:15:59 +03:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2021-07-08 15:37:06 +03:00
|
|
|
/* some requests don't go through iopoll_list */
|
|
|
|
if (tail != ctx->cached_cq_tail ||
|
2021-09-24 23:59:49 +03:00
|
|
|
wq_list_empty(&ctx->iopoll_list))
|
2021-04-13 04:58:45 +03:00
|
|
|
break;
|
2019-08-19 21:15:59 +03:00
|
|
|
}
|
2021-09-24 23:59:43 +03:00
|
|
|
ret = io_do_iopoll(ctx, !min);
|
|
|
|
if (ret < 0)
|
|
|
|
break;
|
|
|
|
nr_events += ret;
|
|
|
|
ret = 0;
|
|
|
|
} while (nr_events < min && !need_resched());
|
2022-03-22 17:07:58 +03:00
|
|
|
|
2019-01-09 18:59:42 +03:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2019-10-17 18:20:46 +03:00
|
|
|
static void kiocb_end_write(struct io_kiocb *req)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2019-10-17 18:20:46 +03:00
|
|
|
/*
|
|
|
|
* Tell lockdep we inherited freeze protection from submission
|
|
|
|
* thread.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_ISREG) {
|
2021-03-22 04:58:31 +03:00
|
|
|
struct super_block *sb = file_inode(req->file)->i_sb;
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2021-03-22 04:58:31 +03:00
|
|
|
__sb_writers_acquired(sb, SB_FREEZE_WRITE);
|
|
|
|
sb_end_write(sb);
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-06-04 20:28:00 +03:00
|
|
|
#ifdef CONFIG_BLOCK
|
2021-01-19 16:32:35 +03:00
|
|
|
static bool io_resubmit_prep(struct io_kiocb *req)
|
2020-06-04 20:28:00 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_async_rw *io = req->async_data;
|
2020-06-04 20:28:00 +03:00
|
|
|
|
2021-10-04 22:02:56 +03:00
|
|
|
if (!req_has_async_data(req))
|
2021-03-22 04:58:33 +03:00
|
|
|
return !io_req_prep_async(req);
|
2022-06-13 15:57:44 +03:00
|
|
|
iov_iter_restore(&io->s.iter, &io->s.iter_state);
|
2021-03-22 04:58:33 +03:00
|
|
|
return true;
|
2020-06-04 20:28:00 +03:00
|
|
|
}
|
|
|
|
|
2021-03-01 23:56:00 +03:00
|
|
|
static bool io_rw_should_reissue(struct io_kiocb *req)
|
2020-06-04 20:28:00 +03:00
|
|
|
{
|
2020-09-02 18:30:31 +03:00
|
|
|
umode_t mode = file_inode(req->file)->i_mode;
|
2021-03-01 23:56:00 +03:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-06-04 20:28:00 +03:00
|
|
|
|
2020-09-02 18:30:31 +03:00
|
|
|
if (!S_ISBLK(mode) && !S_ISREG(mode))
|
|
|
|
return false;
|
2021-03-01 23:56:00 +03:00
|
|
|
if ((req->flags & REQ_F_NOWAIT) || (io_wq_current_is_worker() &&
|
|
|
|
!(ctx->flags & IORING_SETUP_IOPOLL)))
|
2020-06-04 20:28:00 +03:00
|
|
|
return false;
|
2021-02-24 05:17:35 +03:00
|
|
|
/*
|
|
|
|
* If ref is dying, we might be running poll reap from the exit work.
|
|
|
|
* Don't attempt to reissue from that path, just let it fail with
|
|
|
|
* -EAGAIN.
|
|
|
|
*/
|
2021-03-01 23:56:00 +03:00
|
|
|
if (percpu_ref_is_dying(&ctx->refs))
|
|
|
|
return false;
|
2021-07-27 19:50:31 +03:00
|
|
|
/*
|
|
|
|
* Play it safe and assume not safe to re-import and reissue if we're
|
|
|
|
* not in the original thread group (or not in task context).
|
|
|
|
*/
|
|
|
|
if (!same_thread_group(req->task, current) || !in_task())
|
|
|
|
return false;
|
2021-03-01 23:56:00 +03:00
|
|
|
return true;
|
|
|
|
}
|
2021-04-03 04:45:34 +03:00
|
|
|
#else
|
2021-04-12 15:40:02 +03:00
|
|
|
static bool io_resubmit_prep(struct io_kiocb *req)
|
2021-04-03 04:45:34 +03:00
|
|
|
{
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
static bool io_rw_should_reissue(struct io_kiocb *req)
|
2021-03-01 23:56:00 +03:00
|
|
|
{
|
2020-06-04 20:28:00 +03:00
|
|
|
return false;
|
|
|
|
}
|
2021-03-01 23:56:00 +03:00
|
|
|
#endif
|
2020-06-04 20:28:00 +03:00
|
|
|
|
2021-08-11 00:15:25 +03:00
|
|
|
static bool __io_complete_rw_common(struct io_kiocb *req, long res)
|
2020-06-22 20:09:46 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
|
|
|
|
|
|
|
if (rw->kiocb.ki_flags & IOCB_WRITE) {
|
2021-03-22 04:45:59 +03:00
|
|
|
kiocb_end_write(req);
|
2022-03-20 22:08:38 +03:00
|
|
|
fsnotify_modify(req->file);
|
|
|
|
} else {
|
|
|
|
fsnotify_access(req->file);
|
|
|
|
}
|
2022-04-12 17:09:43 +03:00
|
|
|
if (unlikely(res != req->cqe.res)) {
|
2021-03-22 04:58:34 +03:00
|
|
|
if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
|
|
|
|
io_rw_should_reissue(req)) {
|
2022-06-20 15:39:27 +03:00
|
|
|
req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
|
2021-08-11 00:15:25 +03:00
|
|
|
return true;
|
2021-03-22 04:58:34 +03:00
|
|
|
}
|
2021-05-17 00:58:05 +03:00
|
|
|
req_set_fail(req);
|
2022-04-12 17:09:43 +03:00
|
|
|
req->cqe.res = res;
|
2021-03-22 04:58:34 +03:00
|
|
|
}
|
2021-08-11 00:15:25 +03:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2022-05-25 17:57:27 +03:00
|
|
|
inline void io_req_task_complete(struct io_kiocb *req, bool *locked)
|
2021-08-11 00:15:25 +03:00
|
|
|
{
|
2021-08-18 14:42:47 +03:00
|
|
|
if (*locked) {
|
2022-05-25 00:21:00 +03:00
|
|
|
req->cqe.flags |= io_put_kbuf(req, 0);
|
2022-05-24 21:45:38 +03:00
|
|
|
req->flags |= REQ_F_COMPLETE_INLINE;
|
2021-10-04 22:02:48 +03:00
|
|
|
io_req_add_compl_list(req);
|
2021-08-18 14:42:47 +03:00
|
|
|
} else {
|
2022-05-25 00:21:00 +03:00
|
|
|
req->cqe.flags |= io_put_kbuf(req, IO_URING_F_UNLOCKED);
|
|
|
|
io_req_complete_post(req);
|
2021-08-18 14:42:47 +03:00
|
|
|
}
|
2021-08-11 00:15:25 +03:00
|
|
|
}
|
|
|
|
|
2022-01-05 13:12:02 +03:00
|
|
|
static void __io_complete_rw(struct io_kiocb *req, long res,
|
2021-08-11 00:15:25 +03:00
|
|
|
unsigned int issue_flags)
|
|
|
|
{
|
|
|
|
if (__io_complete_rw_common(req, res))
|
|
|
|
return;
|
2022-05-25 00:21:00 +03:00
|
|
|
io_req_set_res(req, req->cqe.res, io_put_kbuf(req, issue_flags));
|
|
|
|
__io_req_complete(req, issue_flags);
|
2019-09-28 20:36:45 +03:00
|
|
|
}
|
|
|
|
|
2021-10-21 18:22:35 +03:00
|
|
|
static void io_complete_rw(struct kiocb *kiocb, long res)
|
2019-09-28 20:36:45 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
|
|
|
|
struct io_kiocb *req = cmd_to_io_kiocb(rw);
|
2019-09-28 20:36:45 +03:00
|
|
|
|
2021-08-11 00:15:25 +03:00
|
|
|
if (__io_complete_rw_common(req, res))
|
|
|
|
return;
|
2022-05-25 00:21:00 +03:00
|
|
|
io_req_set_res(req, res, 0);
|
2021-08-11 00:15:25 +03:00
|
|
|
req->io_task_work.func = io_req_task_complete;
|
2022-05-21 18:17:05 +03:00
|
|
|
io_req_task_prio_work_add(req);
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2021-10-21 18:22:35 +03:00
|
|
|
static void io_complete_rw_iopoll(struct kiocb *kiocb, long res)
|
2019-01-09 18:59:42 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
|
|
|
|
struct io_kiocb *req = cmd_to_io_kiocb(rw);
|
2019-01-09 18:59:42 +03:00
|
|
|
|
2019-10-17 18:20:46 +03:00
|
|
|
if (kiocb->ki_flags & IOCB_WRITE)
|
|
|
|
kiocb_end_write(req);
|
2022-04-12 17:09:43 +03:00
|
|
|
if (unlikely(res != req->cqe.res)) {
|
2021-09-15 13:00:05 +03:00
|
|
|
if (res == -EAGAIN && io_rw_should_reissue(req)) {
|
2022-06-20 15:39:27 +03:00
|
|
|
req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;
|
2021-09-15 13:00:05 +03:00
|
|
|
return;
|
2021-03-22 04:58:34 +03:00
|
|
|
}
|
2022-04-12 17:09:43 +03:00
|
|
|
req->cqe.res = res;
|
2021-03-22 04:58:32 +03:00
|
|
|
}
|
2020-06-15 21:06:38 +03:00
|
|
|
|
2021-09-24 23:59:51 +03:00
|
|
|
/* order with io_iopoll_complete() checking ->iopoll_completed */
|
|
|
|
smp_store_release(&req->iopoll_completed, 1);
|
2019-01-09 18:59:42 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* After the iocb has been issued, it's safe to be found on the poll list.
|
|
|
|
* Adding the kiocb to the list AFTER submission ensures that we don't
|
2021-04-13 04:58:46 +03:00
|
|
|
* find it from an io_do_iopoll() thread before the issuer is done
|
2019-01-09 18:59:42 +03:00
|
|
|
* accessing the kiocb cookie.
|
|
|
|
*/
|
2021-10-15 19:09:12 +03:00
|
|
|
static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-09 18:59:42 +03:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-10-18 16:34:31 +03:00
|
|
|
const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
|
2021-06-14 04:36:14 +03:00
|
|
|
|
|
|
|
/* workqueue context doesn't hold uring_lock, grab it now */
|
2021-10-18 16:34:31 +03:00
|
|
|
if (unlikely(needs_lock))
|
2021-06-14 04:36:14 +03:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2019-01-09 18:59:42 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Track whether we have multiple files in our lists. This will impact
|
|
|
|
* how we do polling eventually, not spinning if we're on potentially
|
|
|
|
* different devices.
|
|
|
|
*/
|
2021-09-24 23:59:49 +03:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list)) {
|
2021-06-28 00:37:30 +03:00
|
|
|
ctx->poll_multi_queue = false;
|
|
|
|
} else if (!ctx->poll_multi_queue) {
|
2019-01-09 18:59:42 +03:00
|
|
|
struct io_kiocb *list_req;
|
|
|
|
|
2021-09-24 23:59:49 +03:00
|
|
|
list_req = container_of(ctx->iopoll_list.first, struct io_kiocb,
|
|
|
|
comp_list);
|
2021-10-12 14:12:14 +03:00
|
|
|
if (list_req->file != req->file)
|
2021-06-28 00:37:30 +03:00
|
|
|
ctx->poll_multi_queue = true;
|
2019-01-09 18:59:42 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For fast devices, IO may have already completed. If it has, add
|
|
|
|
* it to the front so we find it first.
|
|
|
|
*/
|
io_uring: fix io_kiocb.flags modification race in IOPOLL mode
While testing io_uring in arm, we found sometimes io_sq_thread() keeps
polling io requests even though there are not inflight io requests in
block layer. After some investigations, found a possible race about
io_kiocb.flags, see below race codes:
1) in the end of io_write() or io_read()
req->flags &= ~REQ_F_NEED_CLEANUP;
kfree(iovec);
return ret;
2) in io_complete_rw_iopoll()
if (res != -EAGAIN)
req->flags |= REQ_F_IOPOLL_COMPLETED;
In IOPOLL mode, io requests may still be completed by interrupt, so the
above code is not safe: these are concurrent modifications to req->flags
that are neither protected by a lock nor atomic. I also disassembled
io_complete_rw_iopoll() on arm:
req->flags |= REQ_F_IOPOLL_COMPLETED;
0xffff000008387b18 <+76>: ldr w0, [x19,#104]
0xffff000008387b1c <+80>: orr w0, w0, #0x1000
0xffff000008387b20 <+84>: str w0, [x19,#104]
The "req->flags |= REQ_F_IOPOLL_COMPLETED;" statement is a load, modify
and store done as separate instructions, which is obviously not atomic.
To fix this issue, add a new iopoll_completed in io_kiocb to indicate
whether io request is completed.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-11 18:39:36 +03:00
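The resulting pairing is visible in the code above: io_complete_rw_iopoll()
publishes completion with smp_store_release(&req->iopoll_completed, 1) and
io_do_iopoll() reads the flag with smp_load_acquire(), so everything written
before the flag is visible to the poller. As a rough userspace analogy (C11
atomics rather than the kernel primitives; the struct and helpers below are
purely illustrative):

#include <stdatomic.h>

/* Illustrative stand-in for an in-flight request: 'res' is plain data,
 * 'completed' is the publication flag. */
struct fake_req {
        int res;
        atomic_int completed;
};

/* completion side: write the result, then publish with a release store,
 * mirroring smp_store_release(&req->iopoll_completed, 1) */
static void complete_req(struct fake_req *req, int res)
{
        req->res = res;
        atomic_store_explicit(&req->completed, 1, memory_order_release);
}

/* polling side: the acquire load pairs with the release store, so once
 * 'completed' reads 1, 'res' is guaranteed to be visible.  A plain
 * "flags |= bit" here would be a separate load/modify/store and could
 * lose bits set concurrently from interrupt context. */
static int reap_req(struct fake_req *req, int *res)
{
        if (!atomic_load_explicit(&req->completed, memory_order_acquire))
                return 0;
        *res = req->res;
        return 1;
}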
|
|
|
if (READ_ONCE(req->iopoll_completed))
|
2021-09-24 23:59:49 +03:00
|
|
|
wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
|
2019-01-09 18:59:42 +03:00
|
|
|
else
|
2021-09-24 23:59:49 +03:00
|
|
|
wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
|
io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
After making ext4 support iopoll method:
let ext4_file_operations's iopoll method be iomap_dio_iopoll(),
we found fio can easily hang in fio_ioring_getevents() with below fio
job:
rm -f testfile; sync;
sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
-rw=write -ioengine=io_uring -hipri=1 -sqthread_poll=1 -direct=1
-bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.
There are two issues that result in this hang. One reason is that
when IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio
does not use io_uring_enter to get completed events, it relies on
kernel io_sq_thread to poll for completed events.
Another reason is that there is a race: when io_submit_sqes() in
io_sq_thread() submits a batch of sqes, variable 'inflight' will
record the number of submitted reqs, then io_sq_thread will poll for
reqs which have been added to poll_list. But note, if some previous
reqs have been punted to the io worker, those reqs won't show up in
poll_list in time. io_sq_thread() will then only poll for a subset of the
previously submitted reqs, find poll_list empty, and reset the variable
'inflight' to zero. If the app just waits for these deferred reqs and does
not wake up io_sq_thread again, the hang happens.
For an app that entirely relies on io_sq_thread to poll completed requests,
let io_iopoll_req_issued() wake up io_sq_thread properly when adding a new
element to poll_list, and when io_sq_thread prepares to sleep, check
whether poll_list is empty again; if it is not empty, continue to poll.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-25 17:12:08 +03:00
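The fio job above drives a ring created with IORING_SETUP_SQPOLL |
IORING_SETUP_IOPOLL, where the kernel sq thread both submits and reaps. A
minimal liburing sketch of creating such a ring is shown below; the entry
count and idle time are arbitrary, and historically SQPOLL also required the
data files to be registered with io_uring_register_files().

#include <string.h>
#include <liburing.h>

/* Set up a ring where the kernel-side sq thread polls for submissions
 * and completions; the application then mostly just peeks the CQ ring. */
static int setup_sqpoll_iopoll_ring(struct io_uring *ring)
{
        struct io_uring_params p;

        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL;
        p.sq_thread_idle = 2000;        /* ms before the sq thread sleeps */

        return io_uring_queue_init_params(128, ring, &p);
}

Once the sq thread is running, completions can usually be consumed with
io_uring_peek_cqe() alone; liburing's submit helpers only enter the kernel to
wake the thread (IORING_ENTER_SQ_WAKEUP) when the SQ ring flags say it has
gone idle.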
|
|
|
|
2021-10-18 16:34:31 +03:00
|
|
|
if (unlikely(needs_lock)) {
|
2021-06-14 04:36:14 +03:00
|
|
|
/*
|
|
|
|
* If IORING_SETUP_SQPOLL is enabled, sqes are either handled
|
|
|
|
* in sq thread task context or in io worker task context. If
|
|
|
|
* current task context is sq thread, we don't need to check
|
|
|
|
* whether should wake up sq thread.
|
|
|
|
*/
|
|
|
|
if ((ctx->flags & IORING_SETUP_SQPOLL) &&
|
|
|
|
wq_has_sleeper(&ctx->sq_data->wait))
|
|
|
|
wake_up(&ctx->sq_data->wait);
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 18:59:42 +03:00
|
|
|
}
|
|
|
|
|
2020-06-01 19:00:27 +03:00
|
|
|
static bool io_bdev_nowait(struct block_device *bdev)
|
|
|
|
{
|
2020-10-19 11:59:42 +03:00
|
|
|
return !bdev || blk_queue_nowait(bdev_get_queue(bdev));
|
2020-06-01 19:00:27 +03:00
|
|
|
}
|
|
|
|
|
2019-01-07 20:46:33 +03:00
|
|
|
/*
|
|
|
|
* If we tracked the file through the SCM inflight mechanism, we could support
|
|
|
|
* any file. For now, just ensure that anything potentially problematic is done
|
|
|
|
* inline.
|
|
|
|
*/
|
2021-10-17 02:07:10 +03:00
|
|
|
static bool __io_file_supports_nowait(struct file *file, umode_t mode)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2020-06-01 19:00:27 +03:00
|
|
|
if (S_ISBLK(mode)) {
|
2020-11-23 15:38:40 +03:00
|
|
|
if (IS_ENABLED(CONFIG_BLOCK) &&
|
|
|
|
io_bdev_nowait(I_BDEV(file->f_mapping->host)))
|
2020-06-01 19:00:27 +03:00
|
|
|
return true;
|
|
|
|
return false;
|
|
|
|
}
|
2021-06-09 14:07:25 +03:00
|
|
|
if (S_ISSOCK(mode))
|
2019-01-07 20:46:33 +03:00
|
|
|
return true;
|
2020-06-01 19:00:27 +03:00
|
|
|
if (S_ISREG(mode)) {
|
2020-11-23 15:38:40 +03:00
|
|
|
if (IS_ENABLED(CONFIG_BLOCK) &&
|
|
|
|
io_bdev_nowait(file->f_inode->i_sb->s_bdev) &&
|
2022-05-25 19:28:04 +03:00
|
|
|
!io_is_uring_fops(file))
|
2020-06-01 19:00:27 +03:00
|
|
|
return true;
|
|
|
|
return false;
|
|
|
|
}
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2020-06-10 04:23:05 +03:00
|
|
|
/* any ->read/write should understand O_NONBLOCK */
|
|
|
|
if (file->f_flags & O_NONBLOCK)
|
|
|
|
return true;
|
2021-10-17 02:07:09 +03:00
|
|
|
return file->f_mode & FMODE_NOWAIT;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
}
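What this predicate ultimately controls is whether a request on this file can be attempted without punting to the async context. A loose userspace approximation of the same classification, for illustration only; FMODE_NOWAIT and the block-device nowait check are kernel-internal, so this sketch can only mirror the mode-bit and O_NONBLOCK parts:

/* Illustrative userspace sketch, not part of this source file. */
#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>

static bool likely_nowait_capable(int fd)
{
	struct stat st;
	int fl;

	if (fstat(fd, &st) < 0)
		return false;
	/* block devices, sockets and regular files are candidates */
	if (S_ISBLK(st.st_mode) || S_ISSOCK(st.st_mode) || S_ISREG(st.st_mode))
		return true;
	/* pipes, ttys, etc.: only if opened non-blocking */
	fl = fcntl(fd, F_GETFL);
	return fl >= 0 && (fl & O_NONBLOCK);
}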
/*
 * If we tracked the file through the SCM inflight mechanism, we could support
 * any file. For now, just ensure that anything potentially problematic is done
 * inline.
 */
unsigned int io_file_get_flags(struct file *file)
{
	umode_t mode = file_inode(file)->i_mode;
	unsigned int res = 0;

	if (S_ISREG(mode))
		res |= FFS_ISREG;
	if (__io_file_supports_nowait(file, mode))
		res |= FFS_NOWAIT;
	if (io_file_need_scm(file))
		res |= FFS_SCM;
	return res;
}
static inline bool io_file_supports_nowait(struct io_kiocb *req)
{
	return req->flags & REQ_F_SUPPORT_NOWAIT;
}
static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	struct io_rw *rw = io_kiocb_to_cmd(req);
	unsigned ioprio;
	int ret;

	rw->kiocb.ki_pos = READ_ONCE(sqe->off);
	/* used for fixed read/write too - just read unconditionally */
	req->buf_index = READ_ONCE(sqe->buf_index);

	if (req->opcode == IORING_OP_READ_FIXED ||
	    req->opcode == IORING_OP_WRITE_FIXED) {
		struct io_ring_ctx *ctx = req->ctx;
		u16 index;

		if (unlikely(req->buf_index >= ctx->nr_user_bufs))
			return -EFAULT;
		index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
		req->imu = ctx->user_bufs[index];
		io_req_set_rsrc_node(req, ctx, 0);
	}

	ioprio = READ_ONCE(sqe->ioprio);
	if (ioprio) {
		ret = ioprio_check_cap(ioprio);
		if (ret)
			return ret;

		rw->kiocb.ki_ioprio = ioprio;
	} else {
		rw->kiocb.ki_ioprio = get_current_ioprio();
	}

	rw->addr = READ_ONCE(sqe->addr);
	rw->len = READ_ONCE(sqe->len);
	rw->flags = READ_ONCE(sqe->rw_flags);
	return 0;
}
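The SQE fields consumed here (off, addr, len, buf_index, ioprio, rw_flags) are filled in by the application. A minimal sketch of that producer side for a fixed-buffer read, assuming liburing's public helpers are available (io_uring_prep_read_fixed() and friends are liburing, not part of this file; error handling trimmed):

/* Illustrative userspace sketch (liburing), not part of this source file. */
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int read_fixed_example(const char *path)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct iovec iov;
	int fd, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	io_uring_queue_init(8, &ring, 0);

	/* Register one fixed buffer; the kernel maps it up front. */
	iov.iov_len = 4096;
	iov.iov_base = malloc(iov.iov_len);
	io_uring_register_buffers(&ring, &iov, 1);

	/* sqe->addr/len/off/buf_index are what io_prep_rw() reads. */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);

	io_uring_submit(&ring);
	io_uring_wait_cqe(&ring, &cqe);
	ret = cqe->res;		/* bytes read, or -errno */
	io_uring_cqe_seen(&ring, cqe);

	io_uring_queue_exit(&ring);
	free(iov.iov_base);
	close(fd);
	return ret;
}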
static void io_readv_writev_cleanup(struct io_kiocb *req)
{
	struct io_async_rw *io = req->async_data;

	kfree(io->free_iovec);
}
static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
{
	switch (ret) {
	case -EIOCBQUEUED:
		break;
	case -ERESTARTSYS:
	case -ERESTARTNOINTR:
	case -ERESTARTNOHAND:
	case -ERESTART_RESTARTBLOCK:
		/*
		 * We can't just restart the syscall, since previously
		 * submitted sqes may already be in progress. Just fail this
		 * IO with EINTR.
		 */
		ret = -EINTR;
		fallthrough;
	default:
		kiocb->ki_complete(kiocb, ret);
	}
}
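The userspace-visible effect is that a request hit by a signal is not transparently restarted: it completes with -EINTR in cqe->res and the application decides whether to resubmit. A hedged sketch of the reaping side, using liburing names (not part of this file):

/* Illustrative userspace sketch (liburing), not part of this source file. */
#include <errno.h>
#include <liburing.h>

static int reap_one(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int ret = io_uring_wait_cqe(ring, &cqe);

	if (ret < 0)
		return ret;
	ret = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	if (ret == -EINTR) {
		/*
		 * The kernel failed the IO instead of restarting it; the
		 * application may simply prepare and submit it again.
		 */
	}
	return ret;
}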
static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
{
	struct io_rw *rw = io_kiocb_to_cmd(req);

	if (rw->kiocb.ki_pos != -1)
		return &rw->kiocb.ki_pos;

	if (!(req->file->f_mode & FMODE_STREAM)) {
		req->flags |= REQ_F_CUR_POS;
		rw->kiocb.ki_pos = req->file->f_pos;
		return &rw->kiocb.ki_pos;
	}

	rw->kiocb.ki_pos = 0;
	return NULL;
}
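In other words, an offset of -1 on a non-stream file means "use and advance the file position", mirroring read(2)/write(2), and REQ_F_CUR_POS is what later lets kiocb_done() write the position back. A small userspace sketch, assuming liburing (io_uring_prep_read() and IOSQE_IO_LINK are liburing/uapi names, not part of this file):

/* Illustrative userspace sketch (liburing), not part of this source file. */
#include <liburing.h>

static void queue_sequential_reads(struct io_uring *ring, int fd,
				   char *buf1, char *buf2, unsigned len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf1, len, -1);	/* reads at f_pos */
	sqe->flags |= IOSQE_IO_LINK;			/* keep the order */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf2, len, -1);	/* continues after buf1 */

	io_uring_submit(ring);
}

The link flag matters here because both requests share the file position; without ordering, the two reads could race on f_pos.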
static void kiocb_done(struct io_kiocb *req, ssize_t ret,
		       unsigned int issue_flags)
{
	struct io_async_rw *io = req->async_data;
	struct io_rw *rw = io_kiocb_to_cmd(req);

	/* add previously done IO, if any */
	if (req_has_async_data(req) && io->bytes_done > 0) {
		if (ret < 0)
			ret = io->bytes_done;
		else
			ret += io->bytes_done;
	}

	if (req->flags & REQ_F_CUR_POS)
		req->file->f_pos = rw->kiocb.ki_pos;
	if (ret >= 0 && (rw->kiocb.ki_complete == io_complete_rw))
		__io_complete_rw(req, ret, issue_flags);
	else
		io_rw_done(&rw->kiocb, ret);

	if (req->flags & REQ_F_REISSUE) {
		req->flags &= ~REQ_F_REISSUE;
		if (io_resubmit_prep(req))
			io_req_task_queue_reissue(req);
		else
			io_req_task_queue_fail(req, ret);
	}
}
static int __io_import_fixed(struct io_kiocb *req, int ddir,
			     struct iov_iter *iter, struct io_mapped_ubuf *imu)
{
	struct io_rw *rw = io_kiocb_to_cmd(req);
	size_t len = rw->len;
	u64 buf_end, buf_addr = rw->addr;
	size_t offset;

	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
		return -EFAULT;
	/* not inside the mapped region */
	if (unlikely(buf_addr < imu->ubuf || buf_end > imu->ubuf_end))
		return -EFAULT;

	/*
	 * May not be a start of buffer, set size appropriately
	 * and advance us to the beginning.
	 */
	offset = buf_addr - imu->ubuf;
	iov_iter_bvec(iter, ddir, imu->bvec, imu->nr_bvecs, offset + len);

	if (offset) {
		/*
		 * Don't use iov_iter_advance() here, as it's really slow for
		 * using the latter parts of a big fixed buffer - it iterates
		 * over each segment manually. We can cheat a bit here, because
		 * we know that:
		 *
		 * 1) it's a BVEC iter, we set it up
		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
		 *    first and last bvec
		 *
		 * So just find our index, and adjust the iterator afterwards.
		 * If the offset is within the first bvec (or the whole first
		 * bvec), just use iov_iter_advance(). This makes it easier
		 * since we can just skip the first segment, which may not
		 * be PAGE_SIZE aligned.
		 */
		const struct bio_vec *bvec = imu->bvec;

		if (offset <= bvec->bv_len) {
			iov_iter_advance(iter, offset);
		} else {
			unsigned long seg_skip;

			/* skip first vec */
			offset -= bvec->bv_len;
			seg_skip = 1 + (offset >> PAGE_SHIFT);

			iter->bvec = bvec + seg_skip;
			iter->nr_segs -= seg_skip;
			iter->count -= bvec->bv_len + offset;
			iter->iov_offset = offset & ~PAGE_MASK;
		}
	}

	return 0;
}
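The skip arithmetic is easier to follow with concrete numbers. A standalone sketch of the same computation, assuming 4 KiB pages and an unaligned 2 KiB first bvec (the values and the EX_ macros are made up for illustration, not taken from this file):

/* Standalone sketch of the bvec skip math above; PAGE_SIZE assumed 4096. */
#include <stdio.h>

#define EX_PAGE_SHIFT	12
#define EX_PAGE_SIZE	(1UL << EX_PAGE_SHIFT)
#define EX_PAGE_MASK	(~(EX_PAGE_SIZE - 1))

int main(void)
{
	unsigned long first_bv_len = 2048;	/* unaligned first bvec */
	unsigned long offset = 10000;		/* byte offset into the buffer */
	unsigned long seg_skip;

	/* skip the first (possibly short) bvec, then whole pages */
	offset -= first_bv_len;				/* 7952 */
	seg_skip = 1 + (offset >> EX_PAGE_SHIFT);	/* 1 + 1 = 2 bvecs */

	/* prints: start at bvec 2, intra-page offset 3856 */
	printf("start at bvec %lu, intra-page offset %lu\n",
	       seg_skip, offset & ~EX_PAGE_MASK);
	return 0;
}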
static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
			   unsigned int issue_flags)
{
	if (WARN_ON_ONCE(!req->imu))
		return -EFAULT;
	return __io_import_fixed(req, rw, iter, req->imu);
}
static int io_buffer_add_list(struct io_ring_ctx *ctx,
			      struct io_buffer_list *bl, unsigned int bgid)
{
	bl->bgid = bgid;
	if (bgid < BGID_ARRAY)
		return 0;

	return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL));
}
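The split here, small group IDs indexed directly and larger ones stored in the xarray, is a common two-tier lookup. A hypothetical standalone sketch of the pattern (SMALL_BGID, group_table and sparse_lookup are invented for illustration and are not kernel APIs):

/* Hypothetical sketch of a two-tier id lookup, not kernel code. */
#include <stddef.h>

#define SMALL_BGID	64

struct group_table {
	void *small[SMALL_BGID];	/* direct index for id < SMALL_BGID */
	void *(*sparse_lookup)(unsigned int id);	/* fallback map */
};

static void *group_lookup(struct group_table *t, unsigned int id)
{
	if (id < SMALL_BGID)
		return t->small[id];
	return t->sparse_lookup(id);
}

The common, small IDs stay a single array dereference; only the rare large IDs pay for the sparse structure.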
static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t *len,
					      struct io_buffer_list *bl)
{
	if (!list_empty(&bl->buf_list)) {
		struct io_buffer *kbuf;

		kbuf = list_first_entry(&bl->buf_list, struct io_buffer, list);
		list_del(&kbuf->list);
		if (*len > kbuf->len)
			*len = kbuf->len;
		req->flags |= REQ_F_BUFFER_SELECTED;
		req->kbuf = kbuf;
		req->buf_index = kbuf->bid;
		return u64_to_user_ptr(kbuf->addr);
	}
	return NULL;
}
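This is the consuming side of IORING_OP_PROVIDE_BUFFERS: the request carries IOSQE_BUFFER_SELECT and a group ID, and the chosen buffer ID is reported back in cqe->flags as (buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER. A userspace sketch with liburing helpers (not part of this file; error handling omitted):

/* Illustrative userspace sketch (liburing), not part of this source file. */
#include <liburing.h>

#define BGID		7
#define NR_BUFS		8
#define BUF_LEN		4096

static int recv_with_provided_buffer(struct io_uring *ring, int sockfd,
				     char bufs[NR_BUFS][BUF_LEN])
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int bid;

	/* Hand NR_BUFS buffers to the kernel under group BGID. */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_provide_buffers(sqe, bufs, BUF_LEN, NR_BUFS, BGID, 0);
	io_uring_submit(ring);
	io_uring_wait_cqe(ring, &cqe);
	io_uring_cqe_seen(ring, cqe);

	/* Let the kernel pick a buffer from the group when data arrives. */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_recv(sqe, sockfd, NULL, BUF_LEN, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
	io_uring_submit(ring);

	io_uring_wait_cqe(ring, &cqe);
	if (cqe->res >= 0 && (cqe->flags & IORING_CQE_F_BUFFER))
		bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;	/* data in bufs[bid] */
	else
		bid = -1;
	io_uring_cqe_seen(ring, cqe);
	return bid;
}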
|
|
|
|
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume then just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at the time. 'Replenish' is how
many buffers are provided back at the time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means application can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
|
|
|
static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
|
|
|
|
struct io_buffer_list *bl,
|
|
|
|
unsigned int issue_flags)
|
|
|
|
{
|
|
|
|
struct io_uring_buf_ring *br = bl->buf_ring;
|
|
|
|
struct io_uring_buf *buf;
|
2022-06-13 13:11:56 +03:00
|
|
|
__u16 head = bl->head;
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume then just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at the time. 'Replenish' is how
many buffers are provided back at the time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means application can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
|
|
|
|
2022-06-12 16:31:38 +03:00
|
|
|
if (unlikely(smp_load_acquire(&br->tail) == head))
|
2022-05-18 11:40:01 +03:00
|
|
|
return NULL;
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume then just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at the time. 'Replenish' is how
many buffers are provided back at the time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means application can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
|
|
|
|
|
|
|
head &= bl->mask;
|
|
|
|
if (head < IO_BUFFER_LIST_BUF_PER_PAGE) {
|
|
|
|
buf = &br->bufs[head];
|
io_uring: support buffer selection for OP_READ and OP_RECV
If a server process has tons of pending socket connections, generally
it uses epoll to wait for activity. When the socket is ready for reading
(or writing), the task can select a buffer and issue a recv/send on the
given fd.
Now that we have fast (non-async thread) support, a task can have tons
of reads or writes pending. But that means they need buffers to
back that data, and if the number of connections is high enough, having
them preallocated for all possible connections is infeasible.
With IORING_OP_PROVIDE_BUFFERS, an application can register buffers to
use for any request. The request then sets IOSQE_BUFFER_SELECT in the
sqe, and a given group ID in sqe->buf_group. When the fd becomes ready,
a free buffer from the specified group is selected. If none are
available, the request is terminated with -ENOBUFS. If successful, the
CQE on completion will contain the buffer ID chosen in the cqe->flags
member, encoded as:
(buffer_id << IORING_CQE_BUFFER_SHIFT) | IORING_CQE_F_BUFFER;
Once a buffer has been consumed by a request, it is no longer available
and must be registered again with IORING_OP_PROVIDE_BUFFERS.
Requests need to support this feature. For now, IORING_OP_READ and
IORING_OP_RECV support it. This is checked at SQE submission; a CQE with
res == -EOPNOTSUPP will be posted if it is attempted on unsupported requests.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-24 02:42:51 +03:00
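As a hedged user-space illustration of the selection flow just described (not from this patch), the sketch below prepares an IORING_OP_READ that defers buffer choice to the kernel and then decodes the chosen buffer ID from the CQE flags using the encoding quoted above; field and flag names follow the uapi header.

/*
 * Sketch only: queue a buffer-select read against group 'bgid' and
 * recover the buffer ID the kernel picked from cqe->flags.
 */
#include <string.h>
#include <linux/io_uring.h>

static void prep_buffer_select_read(struct io_uring_sqe *sqe, int fd,
				    unsigned int len, unsigned short bgid)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ;
	sqe->fd = fd;
	sqe->len = len;
	sqe->addr = 0;			/* the kernel supplies the buffer */
	sqe->flags = IOSQE_BUFFER_SELECT;
	sqe->buf_group = bgid;
}

static int cqe_buffer_id(const struct io_uring_cqe *cqe)
{
	if (!(cqe->flags & IORING_CQE_F_BUFFER))
		return -1;		/* no buffer was attached */
	return cqe->flags >> IORING_CQE_BUFFER_SHIFT;
}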
|
|
|
} else {
|
2022-04-30 23:38:53 +03:00
|
|
|
int off = head & (IO_BUFFER_LIST_BUF_PER_PAGE - 1);
|
2022-06-13 13:11:55 +03:00
|
|
|
int index = head / IO_BUFFER_LIST_BUF_PER_PAGE;
|
2022-04-30 23:38:53 +03:00
|
|
|
buf = page_address(bl->buf_pages[index]);
|
|
|
|
buf += off;
|
2020-02-24 02:42:51 +03:00
|
|
|
}
|
2022-04-30 23:38:53 +03:00
|
|
|
if (*len > buf->len)
|
|
|
|
*len = buf->len;
|
|
|
|
req->flags |= REQ_F_BUFFER_RING;
|
|
|
|
req->buf_list = bl;
|
|
|
|
req->buf_index = buf->bid;
|
2020-02-24 02:42:51 +03:00
|
|
|
|
2022-06-16 04:51:11 +03:00
|
|
|
if (issue_flags & IO_URING_F_UNLOCKED || !file_can_poll(req->file)) {
|
2022-05-18 11:40:01 +03:00
|
|
|
/*
|
|
|
|
* If we came in unlocked, we have no choice but to consume the
|
|
|
|
* buffer here. This does mean it'll be pinned until the IO
|
|
|
|
* completes. But coming in unlocked means we're in io-wq
|
|
|
|
* context, hence there should be no further retry. For the
|
|
|
|
* locked case, the caller must ensure to call the commit when
|
|
|
|
* the transfer completes (or if we get -EAGAIN and must poll
|
|
|
|
* or retry).
|
|
|
|
*/
|
|
|
|
req->buf_list = NULL;
|
|
|
|
bl->head++;
|
|
|
|
}
|
2022-04-30 23:38:53 +03:00
|
|
|
return u64_to_user_ptr(buf->addr);
|
2020-02-24 02:42:51 +03:00
|
|
|
}
|
|
|
|
|
2022-05-25 15:25:13 +03:00
|
|
|
void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
|
|
|
|
unsigned int issue_flags)
|
2020-02-27 17:31:19 +03:00
|
|
|
{
|
2022-03-18 02:20:10 +03:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_buffer_list *bl;
|
2022-05-18 11:40:01 +03:00
|
|
|
void __user *ret = NULL;
|
2020-02-24 02:42:51 +03:00
|
|
|
|
2022-03-25 14:52:14 +03:00
|
|
|
io_ring_submit_lock(req->ctx, issue_flags);
|
2020-02-27 17:31:19 +03:00
|
|
|
|
2022-04-29 04:09:43 +03:00
|
|
|
bl = io_buffer_get_list(ctx, req->buf_index);
|
2022-05-18 11:40:01 +03:00
|
|
|
if (likely(bl)) {
|
|
|
|
if (bl->buf_nr_pages)
|
|
|
|
ret = io_ring_buffer_select(req, len, bl, issue_flags);
|
|
|
|
else
|
|
|
|
ret = io_provided_buffer_select(req, len, bl);
|
2020-02-24 02:42:51 +03:00
|
|
|
}
|
2022-05-18 11:40:01 +03:00
|
|
|
io_ring_submit_unlock(req->ctx, issue_flags);
|
|
|
|
return ret;
|
2020-02-27 17:31:19 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
static ssize_t io_compat_import(struct io_kiocb *req, struct iovec *iov,
|
2021-10-14 18:10:17 +03:00
|
|
|
unsigned int issue_flags)
|
2020-02-27 17:31:19 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2020-02-27 17:31:19 +03:00
|
|
|
struct compat_iovec __user *uiov;
|
|
|
|
compat_ssize_t clen;
|
|
|
|
void __user *buf;
|
2022-04-28 23:02:49 +03:00
|
|
|
size_t len;
|
2020-02-27 17:31:19 +03:00
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
uiov = u64_to_user_ptr(rw->addr);
|
2020-02-27 17:31:19 +03:00
|
|
|
if (!access_ok(uiov, sizeof(*uiov)))
|
|
|
|
return -EFAULT;
|
|
|
|
if (__get_user(clen, &uiov->iov_len))
|
|
|
|
return -EFAULT;
|
|
|
|
if (clen < 0)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
len = clen;
|
2022-04-29 04:09:43 +03:00
|
|
|
buf = io_buffer_select(req, &len, issue_flags);
|
2022-05-18 11:40:01 +03:00
|
|
|
if (!buf)
|
|
|
|
return -ENOBUFS;
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->addr = (unsigned long) buf;
|
2020-02-27 17:31:19 +03:00
|
|
|
iov[0].iov_base = buf;
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->len = iov[0].iov_len = (compat_size_t) len;
|
2020-02-27 17:31:19 +03:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
static ssize_t __io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
|
2021-10-14 18:10:17 +03:00
|
|
|
unsigned int issue_flags)
|
2020-02-27 17:31:19 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
|
|
|
struct iovec __user *uiov = u64_to_user_ptr(rw->addr);
|
2020-02-27 17:31:19 +03:00
|
|
|
void __user *buf;
|
|
|
|
ssize_t len;
|
|
|
|
|
|
|
|
if (copy_from_user(iov, uiov, sizeof(*uiov)))
|
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
len = iov[0].iov_len;
|
|
|
|
if (len < 0)
|
|
|
|
return -EINVAL;
|
2022-04-29 04:09:43 +03:00
|
|
|
buf = io_buffer_select(req, &len, issue_flags);
|
2022-05-18 11:40:01 +03:00
|
|
|
if (!buf)
|
|
|
|
return -ENOBUFS;
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->addr = (unsigned long) buf;
|
2020-02-27 17:31:19 +03:00
|
|
|
iov[0].iov_base = buf;
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->len = iov[0].iov_len = len;
|
2020-02-27 17:31:19 +03:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ssize_t io_iov_buffer_select(struct io_kiocb *req, struct iovec *iov,
|
2021-10-14 18:10:17 +03:00
|
|
|
unsigned int issue_flags)
|
2020-02-27 17:31:19 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
|
|
|
|
2022-04-30 23:38:53 +03:00
|
|
|
if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING)) {
|
2022-06-13 15:57:44 +03:00
|
|
|
iov[0].iov_base = u64_to_user_ptr(rw->addr);
|
|
|
|
iov[0].iov_len = rw->len;
|
2020-02-27 17:31:19 +03:00
|
|
|
return 0;
|
2020-06-04 20:27:01 +03:00
|
|
|
}
|
2022-06-13 15:57:44 +03:00
|
|
|
if (rw->len != 1)
|
2020-02-27 17:31:19 +03:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (req->ctx->compat)
|
2021-10-14 18:10:17 +03:00
|
|
|
return io_compat_import(req, iov, issue_flags);
|
2020-02-27 17:31:19 +03:00
|
|
|
#endif
|
|
|
|
|
2021-10-14 18:10:17 +03:00
|
|
|
return __io_iov_buffer_select(req, iov, issue_flags);
|
2020-02-27 17:31:19 +03:00
|
|
|
}
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
static struct iovec *__io_import_iovec(int ddir, struct io_kiocb *req,
|
2021-10-15 19:09:14 +03:00
|
|
|
struct io_rw_state *s,
|
|
|
|
unsigned int issue_flags)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2021-10-14 18:10:18 +03:00
|
|
|
struct iov_iter *iter = &s->iter;
|
2021-02-04 16:52:06 +03:00
|
|
|
u8 opcode = req->opcode;
|
2021-10-15 19:09:14 +03:00
|
|
|
struct iovec *iovec;
|
2021-10-15 19:09:13 +03:00
|
|
|
void __user *buf;
|
|
|
|
size_t sqe_len;
|
2020-02-27 17:31:19 +03:00
|
|
|
ssize_t ret;
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. The range sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary limit of 1G per buffer is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
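A minimal sketch of the fixed-buffer flow described above (again, illustrative only and not part of the patch): register an iovec array with IORING_REGISTER_BUFFERS via the raw register syscall, then prepare an IORING_OP_READ_FIXED that targets one of the registered buffers through sqe->buf_index. Buffer count and size are arbitrary example values.

/*
 * Sketch: register NR_BUFS fixed buffers once, then issue READ_FIXED
 * against buffer 'idx'. Error handling is elided for brevity.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

#define NR_BUFS		4
#define BUF_SIZE	(64 * 1024)

static int register_fixed_buffers(int ring_fd, struct iovec *iovs)
{
	int i;

	for (i = 0; i < NR_BUFS; i++) {
		iovs[i].iov_base = malloc(BUF_SIZE);
		iovs[i].iov_len = BUF_SIZE;
	}
	/* Pins the pages once, instead of get_user_pages() per IO. */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, iovs, NR_BUFS);
}

static void prep_read_fixed(struct io_uring_sqe *sqe, int fd,
			    const struct iovec *iovs, unsigned short idx)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = fd;
	/* addr..addr+len must fall inside the registered buffer 'idx'. */
	sqe->addr = (unsigned long) iovs[idx].iov_base;
	sqe->len = BUF_SIZE;
	sqe->buf_index = idx;
}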
|
|
|
|
2021-11-23 03:07:48 +03:00
|
|
|
if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = io_import_fixed(req, ddir, iter, issue_flags);
|
2021-11-23 03:07:48 +03:00
|
|
|
if (ret)
|
|
|
|
return ERR_PTR(ret);
|
|
|
|
return NULL;
|
|
|
|
}
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
buf = u64_to_user_ptr(rw->addr);
|
|
|
|
sqe_len = rw->len;
|
2019-12-20 18:45:55 +03:00
|
|
|
|
2019-12-23 01:19:35 +03:00
|
|
|
if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
|
2022-04-30 21:16:40 +03:00
|
|
|
if (io_do_buffer_select(req)) {
|
2022-04-29 04:09:43 +03:00
|
|
|
buf = io_buffer_select(req, &sqe_len, issue_flags);
|
2022-05-18 11:40:01 +03:00
|
|
|
if (!buf)
|
|
|
|
return ERR_PTR(-ENOBUFS);
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->addr = (unsigned long) buf;
|
|
|
|
rw->len = sqe_len;
|
2020-02-24 02:42:51 +03:00
|
|
|
}
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = import_single_range(ddir, buf, sqe_len, s->fast_iov, iter);
|
2021-11-23 03:07:48 +03:00
|
|
|
if (ret)
|
|
|
|
return ERR_PTR(ret);
|
|
|
|
return NULL;
|
2019-12-23 01:19:35 +03:00
|
|
|
}
|
|
|
|
|
2021-10-15 19:09:14 +03:00
|
|
|
iovec = s->fast_iov;
|
2020-02-27 17:31:19 +03:00
|
|
|
if (req->flags & REQ_F_BUFFER_SELECT) {
|
2021-10-15 19:09:14 +03:00
|
|
|
ret = io_iov_buffer_select(req, iovec, issue_flags);
|
2021-11-23 03:07:48 +03:00
|
|
|
if (ret)
|
|
|
|
return ERR_PTR(ret);
|
2022-06-13 15:57:44 +03:00
|
|
|
iov_iter_init(iter, ddir, iovec, 1, iovec->iov_len);
|
2021-11-23 03:07:48 +03:00
|
|
|
return NULL;
|
2020-02-27 17:31:19 +03:00
|
|
|
}
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = __import_iovec(ddir, buf, sqe_len, UIO_FASTIOV, &iovec, iter,
|
2020-09-25 07:51:41 +03:00
|
|
|
req->ctx->compat);
|
2021-10-15 19:09:14 +03:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ERR_PTR(ret);
|
|
|
|
return iovec;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2021-10-14 18:10:18 +03:00
|
|
|
static inline int io_import_iovec(int rw, struct io_kiocb *req,
|
|
|
|
struct iovec **iovec, struct io_rw_state *s,
|
|
|
|
unsigned int issue_flags)
|
|
|
|
{
|
2021-10-15 19:09:14 +03:00
|
|
|
*iovec = __io_import_iovec(rw, req, s, issue_flags);
|
|
|
|
if (unlikely(IS_ERR(*iovec)))
|
|
|
|
return PTR_ERR(*iovec);
|
2021-10-14 18:10:18 +03:00
|
|
|
|
|
|
|
iov_iter_save_state(&s->iter, &s->iter_state);
|
2021-10-15 19:09:14 +03:00
|
|
|
return 0;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2020-08-26 19:36:20 +03:00
|
|
|
static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
|
|
|
|
{
|
2020-09-30 22:57:15 +03:00
|
|
|
return (kiocb->ki_filp->f_mode & FMODE_STREAM) ? NULL : &kiocb->ki_pos;
|
2020-08-26 19:36:20 +03:00
|
|
|
}
|
|
|
|
|
2019-01-19 08:56:34 +03:00
|
|
|
/*
|
2019-09-23 20:05:34 +03:00
|
|
|
* For files that don't have ->read_iter() and ->write_iter(), handle them
|
|
|
|
* by looping over ->read() or ->write() manually.
|
2019-01-19 08:56:34 +03:00
|
|
|
*/
|
2022-06-13 15:57:44 +03:00
|
|
|
static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
|
2019-09-23 20:05:34 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct kiocb *kiocb = &rw->kiocb;
|
|
|
|
struct file *file = kiocb->ki_filp;
|
2019-09-23 20:05:34 +03:00
|
|
|
ssize_t ret = 0;
|
2022-02-22 13:55:01 +03:00
|
|
|
loff_t *ppos;
|
2019-09-23 20:05:34 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't support polled IO through this interface, and we can't
|
|
|
|
* support non-blocking either. For the latter, this just causes
|
|
|
|
* the kiocb to be handled from an async context.
|
|
|
|
*/
|
|
|
|
if (kiocb->ki_flags & IOCB_HIPRI)
|
|
|
|
return -EOPNOTSUPP;
|
2021-10-17 02:07:09 +03:00
|
|
|
if ((kiocb->ki_flags & IOCB_NOWAIT) &&
|
|
|
|
!(kiocb->ki_filp->f_flags & O_NONBLOCK))
|
2019-09-23 20:05:34 +03:00
|
|
|
return -EAGAIN;
|
|
|
|
|
2022-02-22 13:55:01 +03:00
|
|
|
ppos = io_kiocb_ppos(kiocb);
|
|
|
|
|
2019-09-23 20:05:34 +03:00
|
|
|
while (iov_iter_count(iter)) {
|
2019-11-24 11:58:24 +03:00
|
|
|
struct iovec iovec;
|
2019-09-23 20:05:34 +03:00
|
|
|
ssize_t nr;
|
|
|
|
|
2019-11-24 11:58:24 +03:00
|
|
|
if (!iov_iter_is_bvec(iter)) {
|
|
|
|
iovec = iov_iter_iovec(iter);
|
|
|
|
} else {
|
2022-06-13 15:57:44 +03:00
|
|
|
iovec.iov_base = u64_to_user_ptr(rw->addr);
|
|
|
|
iovec.iov_len = rw->len;
|
2019-11-24 11:58:24 +03:00
|
|
|
}
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
if (ddir == READ) {
|
2019-09-23 20:05:34 +03:00
|
|
|
nr = file->f_op->read(file, iovec.iov_base,
|
2022-02-22 13:55:01 +03:00
|
|
|
iovec.iov_len, ppos);
|
2019-09-23 20:05:34 +03:00
|
|
|
} else {
|
|
|
|
nr = file->f_op->write(file, iovec.iov_base,
|
2022-02-22 13:55:01 +03:00
|
|
|
iovec.iov_len, ppos);
|
2019-09-23 20:05:34 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (nr < 0) {
|
|
|
|
if (!ret)
|
|
|
|
ret = nr;
|
|
|
|
break;
|
|
|
|
}
|
2022-03-18 20:28:13 +03:00
|
|
|
ret += nr;
|
2021-09-12 15:45:07 +03:00
|
|
|
if (!iov_iter_is_bvec(iter)) {
|
|
|
|
iov_iter_advance(iter, nr);
|
|
|
|
} else {
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->addr += nr;
|
|
|
|
rw->len -= nr;
|
|
|
|
if (!rw->len)
|
2022-03-18 20:28:13 +03:00
|
|
|
break;
|
2021-09-12 15:45:07 +03:00
|
|
|
}
|
2019-09-23 20:05:34 +03:00
|
|
|
if (nr != iovec.iov_len)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2020-08-13 18:47:43 +03:00
|
|
|
static void io_req_map_rw(struct io_kiocb *req, const struct iovec *iovec,
|
|
|
|
const struct iovec *fast_iov, struct iov_iter *iter)
|
2019-12-02 21:03:47 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_async_rw *io = req->async_data;
|
2020-07-13 22:59:18 +03:00
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
memcpy(&io->s.iter, iter, sizeof(*iter));
|
|
|
|
io->free_iovec = iovec;
|
|
|
|
io->bytes_done = 0;
|
2020-08-13 18:47:43 +03:00
|
|
|
/* can only be fixed buffers, no need to do anything */
|
2020-11-24 02:20:27 +03:00
|
|
|
if (iov_iter_is_bvec(iter))
|
2020-08-13 18:47:43 +03:00
|
|
|
return;
|
2020-07-13 22:59:18 +03:00
|
|
|
if (!iovec) {
|
2020-08-13 18:47:43 +03:00
|
|
|
unsigned iov_off = 0;
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
io->s.iter.iov = io->s.fast_iov;
|
2020-08-13 18:47:43 +03:00
|
|
|
if (iter->iov != fast_iov) {
|
|
|
|
iov_off = iter->iov - fast_iov;
|
2022-06-13 15:57:44 +03:00
|
|
|
io->s.iter.iov += iov_off;
|
2020-08-13 18:47:43 +03:00
|
|
|
}
|
2022-06-13 15:57:44 +03:00
|
|
|
if (io->s.fast_iov != fast_iov)
|
|
|
|
memcpy(io->s.fast_iov + iov_off, fast_iov + iov_off,
|
2020-04-08 17:29:58 +03:00
|
|
|
sizeof(struct iovec) * iter->nr_segs);
|
2020-02-07 22:04:45 +03:00
|
|
|
} else {
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
2019-12-02 21:03:47 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-05-25 14:59:19 +03:00
|
|
|
bool io_alloc_async_data(struct io_kiocb *req)
|
2020-03-27 10:36:52 +03:00
|
|
|
{
|
2020-08-16 04:44:09 +03:00
|
|
|
WARN_ON_ONCE(!io_op_defs[req->opcode].async_size);
|
|
|
|
req->async_data = kmalloc(io_op_defs[req->opcode].async_size, GFP_KERNEL);
|
2021-10-04 22:02:56 +03:00
|
|
|
if (req->async_data) {
|
|
|
|
req->flags |= REQ_F_ASYNC_DATA;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
return true;
|
2020-03-27 10:36:52 +03:00
|
|
|
}
|
|
|
|
|
2020-08-13 18:47:43 +03:00
|
|
|
static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
|
2021-10-14 18:10:16 +03:00
|
|
|
struct io_rw_state *s, bool force)
|
2019-12-16 08:13:43 +03:00
|
|
|
{
|
2022-05-24 02:30:37 +03:00
|
|
|
if (!force && !io_op_defs[req->opcode].prep_async)
|
2020-01-14 05:23:24 +03:00
|
|
|
return 0;
|
2021-10-04 22:02:56 +03:00
|
|
|
if (!req_has_async_data(req)) {
|
2021-09-10 20:19:14 +03:00
|
|
|
struct io_async_rw *iorw;
|
|
|
|
|
2021-03-01 01:35:17 +03:00
|
|
|
if (io_alloc_async_data(req)) {
|
2021-02-04 16:52:01 +03:00
|
|
|
kfree(iovec);
|
2020-01-31 22:06:52 +03:00
|
|
|
return -ENOMEM;
|
2021-02-04 16:52:01 +03:00
|
|
|
}
|
2019-12-16 08:13:43 +03:00
|
|
|
|
2021-10-14 18:10:16 +03:00
|
|
|
io_req_map_rw(req, iovec, s->fast_iov, &s->iter);
|
2021-09-10 20:19:14 +03:00
|
|
|
iorw = req->async_data;
|
|
|
|
/* we've copied and mapped the iter, ensure state is saved */
|
2021-10-14 18:10:15 +03:00
|
|
|
iov_iter_save_state(&iorw->s.iter, &iorw->s.iter_state);
|
2020-01-31 22:06:52 +03:00
|
|
|
}
|
2019-12-16 08:13:43 +03:00
|
|
|
return 0;
|
2019-12-02 21:03:47 +03:00
|
|
|
}
|
|
|
|
|
2020-09-30 22:57:54 +03:00
|
|
|
static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
|
2020-07-13 22:59:19 +03:00
|
|
|
{
|
2020-08-16 04:44:09 +03:00
|
|
|
struct io_async_rw *iorw = req->async_data;
|
2021-10-14 18:10:18 +03:00
|
|
|
struct iovec *iov;
|
2021-02-04 16:52:06 +03:00
|
|
|
int ret;
|
2020-07-13 22:59:19 +03:00
|
|
|
|
2021-10-14 18:10:17 +03:00
|
|
|
/* submission path, ->uring_lock should already be taken */
|
2021-10-18 16:34:31 +03:00
|
|
|
ret = io_import_iovec(rw, req, &iov, &iorw->s, 0);
|
2020-07-13 22:59:19 +03:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
|
|
|
|
2020-09-06 00:45:47 +03:00
|
|
|
iorw->bytes_done = 0;
|
|
|
|
iorw->free_iovec = iov;
|
|
|
|
if (iov)
|
|
|
|
req->flags |= REQ_F_NEED_CLEANUP;
|
2020-07-13 22:59:19 +03:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-05-24 02:16:21 +03:00
|
|
|
static int io_readv_prep_async(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return io_rw_prep_async(req, READ);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_writev_prep_async(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return io_rw_prep_async(req, WRITE);
|
|
|
|
}
|
|
|
|
|
2020-08-04 01:43:59 +03:00
|
|
|
/*
|
2020-12-31 01:58:40 +03:00
|
|
|
* This is our waitqueue callback handler, registered through __folio_lock_async()
|
2020-08-04 01:43:59 +03:00
|
|
|
* when we initially tried to do the IO with the iocb armed with our waitqueue.
|
|
|
|
* This gets called when the page is unlocked, and we generally expect that to
|
|
|
|
* happen when the page IO is completed and the page is now uptodate. This will
|
|
|
|
* queue a task_work based retry of the operation, attempting to copy the data
|
|
|
|
* again. If the latter fails because the page was NOT uptodate, then we will
|
|
|
|
* do a thread based blocking retry of the operation. That's the unexpected
|
|
|
|
* slow path.
|
|
|
|
*/
|
2020-05-22 18:24:42 +03:00
|
|
|
static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
|
|
|
|
int sync, void *arg)
|
|
|
|
{
|
|
|
|
struct wait_page_queue *wpq;
|
|
|
|
struct io_kiocb *req = wait->private;
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2020-05-22 18:24:42 +03:00
|
|
|
struct wait_page_key *key = arg;
|
|
|
|
|
|
|
|
wpq = container_of(wait, struct wait_page_queue, wait);
|
|
|
|
|
2020-08-03 23:01:22 +03:00
|
|
|
if (!wake_page_match(wpq, key))
|
|
|
|
return 0;
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
rw->kiocb.ki_flags &= ~IOCB_WAITQ;
|
2020-05-22 18:24:42 +03:00
|
|
|
list_del_init(&wait->entry);
|
2021-02-12 06:23:53 +03:00
|
|
|
io_req_task_queue(req);
|
2020-05-22 18:24:42 +03:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2020-08-04 01:43:59 +03:00
|
|
|
/*
|
|
|
|
* This controls whether a given IO request should be armed for async page
|
|
|
|
* based retry. If we return false here, the request is handed to the async
|
|
|
|
* worker threads for retry. If we're doing buffered reads on a regular file,
|
|
|
|
* we prepare a private wait_page_queue entry and retry the operation. This
|
|
|
|
* will either succeed because the page is now uptodate and unlocked, or it
|
|
|
|
* will register a callback when the page is unlocked at IO completion. Through
|
|
|
|
* that callback, io_uring uses task_work to setup a retry of the operation.
|
|
|
|
* That retry will attempt the buffered read again. The retry will generally
|
|
|
|
* succeed, or in rare cases where it fails, we then fall back to using the
|
|
|
|
* async worker threads for a blocking retry.
|
|
|
|
*/
|
2020-08-13 20:51:40 +03:00
|
|
|
static bool io_rw_should_retry(struct io_kiocb *req)
|
2019-12-02 21:03:47 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_async_rw *io = req->async_data;
|
|
|
|
struct wait_page_queue *wait = &io->wpq;
|
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
|
|
|
struct kiocb *kiocb = &rw->kiocb;
|
2019-12-02 21:03:47 +03:00
|
|
|
|
2020-05-22 18:24:42 +03:00
|
|
|
/* never retry for NOWAIT, we just complete with -EAGAIN */
|
|
|
|
if (req->flags & REQ_F_NOWAIT)
|
|
|
|
return false;
|
2019-12-02 21:03:47 +03:00
|
|
|
|
2020-08-13 20:51:40 +03:00
|
|
|
/* Only for buffered IO */
|
2020-08-16 20:58:43 +03:00
|
|
|
if (kiocb->ki_flags & (IOCB_DIRECT | IOCB_HIPRI))
|
2020-05-22 18:24:42 +03:00
|
|
|
return false;
|
2020-08-16 20:58:43 +03:00
|
|
|
|
2020-05-22 18:24:42 +03:00
|
|
|
/*
|
|
|
|
* just use poll if we can, and don't attempt if the fs doesn't
|
|
|
|
* support callback based unlocks
|
|
|
|
*/
|
|
|
|
if (file_can_poll(req->file) || !(req->file->f_mode & FMODE_BUF_RASYNC))
|
|
|
|
return false;
|
2019-12-02 21:03:47 +03:00
|
|
|
|
2020-08-16 20:58:43 +03:00
|
|
|
wait->wait.func = io_async_buf_func;
|
|
|
|
wait->wait.private = req;
|
|
|
|
wait->wait.flags = 0;
|
|
|
|
INIT_LIST_HEAD(&wait->wait.entry);
|
|
|
|
kiocb->ki_flags |= IOCB_WAITQ;
|
2020-09-29 15:00:45 +03:00
|
|
|
kiocb->ki_flags &= ~IOCB_NOWAIT;
|
2020-08-16 20:58:43 +03:00
|
|
|
kiocb->ki_waitq = wait;
|
|
|
|
return true;
|
2020-05-22 18:24:42 +03:00
|
|
|
}
|
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
static inline int io_iter_do_read(struct io_rw *rw, struct iov_iter *iter)
|
2020-05-22 18:24:42 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct file *file = rw->kiocb.ki_filp;
|
|
|
|
|
|
|
|
if (likely(file->f_op->read_iter))
|
|
|
|
return call_read_iter(file, &rw->kiocb, iter);
|
|
|
|
else if (file->f_op->read)
|
|
|
|
return loop_rw_iter(READ, rw, iter);
|
2020-08-05 13:53:50 +03:00
|
|
|
else
|
|
|
|
return -EINVAL;
|
2019-12-02 21:03:47 +03:00
|
|
|
}
|
|
|
|
|
2021-08-21 18:07:51 +03:00
|
|
|
static bool need_read_all(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
return req->flags & REQ_F_ISREG ||
|
|
|
|
S_ISBLK(file_inode(req->file)->i_mode);
|
|
|
|
}
|
|
|
|
|
2022-03-29 19:48:05 +03:00
|
|
|
static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
|
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
|
|
|
struct kiocb *kiocb = &rw->kiocb;
|
2022-03-29 19:48:05 +03:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct file *file = req->file;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
if (unlikely(!file || !(file->f_mode & mode)))
|
|
|
|
return -EBADF;
|
|
|
|
|
|
|
|
if (!io_req_ffs_set(req))
|
|
|
|
req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT;
|
|
|
|
|
|
|
|
kiocb->ki_flags = iocb_flags(file);
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = kiocb_set_rw_flags(kiocb, rw->flags);
|
2022-03-29 19:48:05 +03:00
|
|
|
if (unlikely(ret))
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the file is marked O_NONBLOCK, still allow retry for it if it
|
|
|
|
* supports async. Otherwise it's impossible to use O_NONBLOCK files
|
|
|
|
* reliably. If not, or if IOCB_NOWAIT is set, don't retry.
|
|
|
|
*/
|
|
|
|
if ((kiocb->ki_flags & IOCB_NOWAIT) ||
|
|
|
|
((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
|
|
|
|
req->flags |= REQ_F_NOWAIT;
|
|
|
|
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL) {
|
|
|
|
if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
|
|
|
|
return -EOPNOTSUPP;
|
|
|
|
|
2022-04-28 19:57:52 +03:00
|
|
|
kiocb->private = NULL;
|
2022-03-29 19:48:05 +03:00
|
|
|
kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
|
|
|
|
kiocb->ki_complete = io_complete_rw_iopoll;
|
|
|
|
req->iopoll_completed = 0;
|
|
|
|
} else {
|
|
|
|
if (kiocb->ki_flags & IOCB_HIPRI)
|
|
|
|
return -EINVAL;
|
|
|
|
kiocb->ki_complete = io_complete_rw;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 03:03:09 +03:00
|
|
|
static int io_read(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2021-10-14 18:10:19 +03:00
|
|
|
struct io_rw_state __s, *s = &__s;
|
2021-10-14 18:10:16 +03:00
|
|
|
struct iovec *iovec;
|
2022-06-13 15:57:44 +03:00
|
|
|
struct kiocb *kiocb = &rw->kiocb;
|
2021-02-10 03:03:07 +03:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_async_rw *io;
|
2021-09-10 20:19:14 +03:00
|
|
|
ssize_t ret, ret2;
|
2022-02-22 13:55:03 +03:00
|
|
|
loff_t *ppos;
|
2020-08-13 18:47:43 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
if (!req_has_async_data(req)) {
|
|
|
|
ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
|
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
|
|
|
} else {
|
2022-06-13 15:57:44 +03:00
|
|
|
io = req->async_data;
|
|
|
|
s = &io->s;
|
2022-06-30 16:20:06 +03:00
|
|
|
|
2022-03-10 19:54:25 +03:00
|
|
|
/*
|
|
|
|
* Safe and required to re-import if we're using provided
|
|
|
|
* buffers, as we dropped the selected one before retry.
|
|
|
|
*/
|
2022-06-30 16:20:06 +03:00
|
|
|
if (io_do_buffer_select(req)) {
|
2022-03-10 19:54:25 +03:00
|
|
|
ret = io_import_iovec(READ, req, &iovec, s, issue_flags);
|
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2021-09-10 20:19:14 +03:00
|
|
|
/*
|
|
|
|
* We come here from an earlier attempt, restore our state to
|
|
|
|
* match in case it doesn't. It's cheap enough that we don't
|
|
|
|
* need to make this conditional.
|
|
|
|
*/
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_restore(&s->iter, &s->iter_state);
|
2020-11-07 16:16:27 +03:00
|
|
|
iovec = NULL;
|
|
|
|
}
|
2022-03-29 19:48:05 +03:00
|
|
|
ret = io_rw_init_file(req, FMODE_READ);
|
2022-04-17 06:14:00 +03:00
|
|
|
if (unlikely(ret)) {
|
|
|
|
kfree(iovec);
|
2022-03-29 19:48:05 +03:00
|
|
|
return ret;
|
2022-04-17 06:14:00 +03:00
|
|
|
}
|
2022-04-12 17:09:43 +03:00
|
|
|
req->cqe.res = iov_iter_count(&s->iter);
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
if (force_nonblock) {
|
|
|
|
/* If the file doesn't support async, just async punt */
|
2021-10-17 02:07:09 +03:00
|
|
|
if (unlikely(!io_file_supports_nowait(req))) {
|
2021-10-14 18:10:19 +03:00
|
|
|
ret = io_setup_async_rw(req, iovec, s, true);
|
|
|
|
return ret ?: -EAGAIN;
|
|
|
|
}
|
2020-09-30 22:57:53 +03:00
|
|
|
kiocb->ki_flags |= IOCB_NOWAIT;
|
2021-10-14 18:10:19 +03:00
|
|
|
} else {
|
|
|
|
/* Ensure we clear previously set non-block flag */
|
|
|
|
kiocb->ki_flags &= ~IOCB_NOWAIT;
|
2021-02-04 16:51:59 +03:00
|
|
|
}
|
2019-05-11 01:07:28 +03:00
|
|
|
|
2022-02-22 13:55:03 +03:00
|
|
|
ppos = io_kiocb_update_pos(req);
|
2022-02-22 13:55:02 +03:00
|
|
|
|
2022-04-12 17:09:43 +03:00
|
|
|
ret = rw_verify_area(READ, req->file, ppos, req->cqe.res);
|
2021-02-04 16:52:03 +03:00
|
|
|
if (unlikely(ret)) {
|
|
|
|
kfree(iovec);
|
|
|
|
return ret;
|
|
|
|
}
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = io_iter_do_read(rw, &s->iter);
|
2019-09-23 20:05:34 +03:00
|
|
|
|
2021-04-02 05:41:15 +03:00
|
|
|
if (ret == -EAGAIN || (req->flags & REQ_F_REISSUE)) {
|
2021-04-08 03:54:39 +03:00
|
|
|
req->flags &= ~REQ_F_REISSUE;
|
2022-03-10 02:46:07 +03:00
|
|
|
/* if we can poll, just do that */
|
|
|
|
if (req->opcode == IORING_OP_READ && file_can_poll(req->file))
|
|
|
|
return -EAGAIN;
|
2020-08-28 01:40:19 +03:00
|
|
|
/* IOPOLL retry should happen for io-wq threads */
|
|
|
|
if (!force_nonblock && !(req->ctx->flags & IORING_SETUP_IOPOLL))
|
2020-08-16 01:58:42 +03:00
|
|
|
goto done;
|
2021-02-04 16:52:05 +03:00
|
|
|
/* no retry on NONBLOCK nor RWF_NOWAIT */
|
|
|
|
if (req->flags & REQ_F_NOWAIT)
|
2020-09-02 18:30:31 +03:00
|
|
|
goto done;
|
2020-09-26 00:23:43 +03:00
|
|
|
ret = 0;
|
2021-04-02 05:41:15 +03:00
|
|
|
} else if (ret == -EIOCBQUEUED) {
|
|
|
|
goto out_free;
|
2022-04-12 17:09:43 +03:00
|
|
|
} else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
|
2021-08-21 18:07:51 +03:00
|
|
|
(req->flags & REQ_F_NOWAIT) || !need_read_all(req)) {
|
2021-02-04 16:52:02 +03:00
|
|
|
/* read all, failed, already did sync or don't want to retry */
|
2020-08-25 21:59:22 +03:00
|
|
|
goto done;
|
2020-08-13 20:51:40 +03:00
|
|
|
}
|
|
|
|
|
2021-09-10 20:19:14 +03:00
|
|
|
/*
|
|
|
|
* Don't depend on the iter state matching what was consumed, or being
|
|
|
|
* untouched in case of error. Restore it and we'll advance it
|
|
|
|
* manually if we need to.
|
|
|
|
*/
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_restore(&s->iter, &s->iter_state);
|
2021-09-10 20:19:14 +03:00
|
|
|
|
2021-10-14 18:10:16 +03:00
|
|
|
ret2 = io_setup_async_rw(req, iovec, s, true);
|
2021-02-04 16:52:01 +03:00
|
|
|
if (ret2)
|
|
|
|
return ret2;
|
|
|
|
|
2021-02-18 00:02:36 +03:00
|
|
|
iovec = NULL;
|
2022-06-13 15:57:44 +03:00
|
|
|
io = req->async_data;
|
|
|
|
s = &io->s;
|
2021-09-10 20:19:14 +03:00
|
|
|
/*
|
|
|
|
* Now use our persistent iterator and state, if we aren't already.
|
|
|
|
* We've restored and mapped the iter to match.
|
|
|
|
*/
|
2020-08-13 20:51:40 +03:00
|
|
|
|
2021-02-04 16:52:04 +03:00
|
|
|
do {
|
2021-09-10 20:19:14 +03:00
|
|
|
/*
|
|
|
|
* We end up here because of a partial read, either from
|
|
|
|
* above or inside this loop. Advance the iter by the bytes
|
|
|
|
* that were consumed.
|
|
|
|
*/
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_advance(&s->iter, ret);
|
|
|
|
if (!iov_iter_count(&s->iter))
|
2021-09-10 20:19:14 +03:00
|
|
|
break;
|
2022-06-13 15:57:44 +03:00
|
|
|
io->bytes_done += ret;
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_save_state(&s->iter, &s->iter_state);
|
2021-09-10 20:19:14 +03:00
|
|
|
|
2021-02-04 16:52:04 +03:00
|
|
|
/* if we can retry, do so with the callbacks armed */
|
|
|
|
if (!io_rw_should_retry(req)) {
|
|
|
|
kiocb->ki_flags &= ~IOCB_WAITQ;
|
|
|
|
return -EAGAIN;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Now retry read with the IOCB_WAITQ parts set in the iocb. If
|
|
|
|
* we get -EIOCBQUEUED, then we'll get a notification when the
|
|
|
|
* desired page gets unlocked. We can also get a partial read
|
|
|
|
* here, and if we do, then just retry at the new offset.
|
|
|
|
*/
|
2022-06-13 15:57:44 +03:00
|
|
|
ret = io_iter_do_read(rw, &s->iter);
|
2021-02-04 16:52:04 +03:00
|
|
|
if (ret == -EIOCBQUEUED)
|
2022-05-25 00:21:00 +03:00
|
|
|
return IOU_ISSUE_SKIP_COMPLETE;
|
2020-08-13 20:51:40 +03:00
|
|
|
/* we got some bytes, but not all. retry. */
|
2021-03-05 07:02:58 +03:00
|
|
|
kiocb->ki_flags &= ~IOCB_WAITQ;
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_restore(&s->iter, &s->iter_state);
|
2021-09-10 20:19:14 +03:00
|
|
|
} while (ret > 0);
|
2020-08-13 20:51:40 +03:00
|
|
|
done:
|
2021-11-23 03:07:49 +03:00
|
|
|
kiocb_done(req, ret, issue_flags);
|
2021-02-18 00:02:36 +03:00
|
|
|
out_free:
|
|
|
|
/* it's faster to check here than to delegate to kfree */
|
|
|
|
if (iovec)
|
|
|
|
kfree(iovec);
|
2022-05-25 00:21:00 +03:00
|
|
|
return IOU_ISSUE_SKIP_COMPLETE;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
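As a hedged illustration of the REQ_F_NOWAIT handling in io_read() (a sketch assuming liburing and a filesystem that sets FMODE_NOWAIT; the file path comes from argv): RWF_NOWAIT on the SQE becomes IOCB_NOWAIT in io_rw_init_file(), so the read is neither punted to io-wq nor retried with IOCB_WAITQ, and an uncached buffered read simply completes with -EAGAIN in the CQE.

#define _GNU_SOURCE		/* for RWF_NOWAIT */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <liburing.h>

int main(int argc, char **argv)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd;

	if (argc < 2 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	/* RWF_NOWAIT -> IOCB_NOWAIT -> REQ_F_NOWAIT: no punt, no retry */
	sqe->rw_flags = RWF_NOWAIT;
	io_uring_submit(&ring);

	if (!io_uring_wait_cqe(&ring, &cqe)) {
		if (cqe->res == -EAGAIN)
			printf("data not cached; kernel did not retry\n");
		else
			printf("read completed inline: %d bytes\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}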
|
|
|
|
|
2021-02-10 03:03:09 +03:00
|
|
|
static int io_write(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_rw *rw = io_kiocb_to_cmd(req);
|
2021-10-14 18:10:19 +03:00
|
|
|
struct io_rw_state __s, *s = &__s;
|
2021-10-14 18:10:16 +03:00
|
|
|
struct iovec *iovec;
|
2022-06-13 15:57:44 +03:00
|
|
|
struct kiocb *kiocb = &rw->kiocb;
|
2021-02-10 03:03:07 +03:00
|
|
|
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
|
2021-09-10 20:19:14 +03:00
|
|
|
ssize_t ret, ret2;
|
2022-02-22 13:55:03 +03:00
|
|
|
loff_t *ppos;
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
if (!req_has_async_data(req)) {
|
2021-10-14 18:10:18 +03:00
|
|
|
ret = io_import_iovec(WRITE, req, &iovec, s, issue_flags);
|
|
|
|
if (unlikely(ret < 0))
|
2020-11-07 16:16:27 +03:00
|
|
|
return ret;
|
2021-10-14 18:10:19 +03:00
|
|
|
} else {
|
2022-06-13 15:57:44 +03:00
|
|
|
struct io_async_rw *io = req->async_data;
|
2021-10-14 18:10:19 +03:00
|
|
|
|
2022-06-13 15:57:44 +03:00
|
|
|
s = &io->s;
|
2021-10-14 18:10:19 +03:00
|
|
|
iov_iter_restore(&s->iter, &s->iter_state);
|
2020-11-07 16:16:27 +03:00
|
|
|
iovec = NULL;
|
|
|
|
}
|
2022-03-29 19:48:05 +03:00
|
|
|
ret = io_rw_init_file(req, FMODE_WRITE);
|
2022-04-17 06:14:00 +03:00
|
|
|
if (unlikely(ret)) {
|
|
|
|
kfree(iovec);
|
2022-03-29 19:48:05 +03:00
|
|
|
return ret;
|
2022-04-17 06:14:00 +03:00
|
|
|
}
|
2022-04-12 17:09:43 +03:00
|
|
|
req->cqe.res = iov_iter_count(&s->iter);
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
if (force_nonblock) {
|
|
|
|
/* If the file doesn't support async, just async punt */
|
2021-10-17 02:07:09 +03:00
|
|
|
if (unlikely(!io_file_supports_nowait(req)))
|
2021-10-14 18:10:19 +03:00
|
|
|
goto copy_iov;
|
2019-12-18 22:19:41 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
/* file path doesn't support NOWAIT for non-direct IO */
|
|
|
|
if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
|
|
|
|
(req->flags & REQ_F_ISREG))
|
|
|
|
goto copy_iov;
|
2019-01-19 08:56:34 +03:00
|
|
|
|
2021-10-14 18:10:19 +03:00
|
|
|
kiocb->ki_flags |= IOCB_NOWAIT;
|
|
|
|
} else {
|
|
|
|
/* Ensure we clear previously set non-block flag */
|
|
|
|
kiocb->ki_flags &= ~IOCB_NOWAIT;
|
|
|
|
}
|
2019-01-19 08:56:34 +03:00
|
|
|
|
2022-02-22 13:55:03 +03:00
|
|
|
ppos = io_kiocb_update_pos(req);
|
2022-02-22 13:55:02 +03:00
|
|
|
|
2022-04-12 17:09:43 +03:00
|
|
|
ret = rw_verify_area(WRITE, req->file, ppos, req->cqe.res);
|
2020-08-01 13:50:02 +03:00
|
|
|
if (unlikely(ret))
|
|
|
|
goto out_free;
|
2020-03-20 20:23:41 +03:00
|
|
|
|
2020-08-01 13:50:02 +03:00
|
|
|
/*
|
|
|
|
* Open-code file_start_write here to grab freeze protection,
|
|
|
|
* which will be released by another thread in
|
|
|
|
* io_complete_rw(). Fool lockdep by telling it the lock got
|
|
|
|
* released so that it doesn't complain about the held lock when
|
|
|
|
* we return to userspace.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_ISREG) {
|
2020-11-11 03:50:21 +03:00
|
|
|
sb_start_write(file_inode(req->file)->i_sb);
|
2020-08-01 13:50:02 +03:00
|
|
|
__sb_writers_release(file_inode(req->file)->i_sb,
|
|
|
|
SB_FREEZE_WRITE);
|
|
|
|
}
|
|
|
|
kiocb->ki_flags |= IOCB_WRITE;
|
2020-03-20 20:23:41 +03:00
|
|
|
|
2021-10-17 02:07:09 +03:00
|
|
|
if (likely(req->file->f_op->write_iter))
|
2021-10-14 18:10:16 +03:00
|
|
|
ret2 = call_write_iter(req->file, kiocb, &s->iter);
|
2020-08-05 13:53:50 +03:00
|
|
|
else if (req->file->f_op->write)
|
2022-06-13 15:57:44 +03:00
|
|
|
ret2 = loop_rw_iter(WRITE, rw, &s->iter);
|
2020-08-05 13:53:50 +03:00
|
|
|
else
|
|
|
|
ret2 = -EINVAL;
|
2020-03-20 20:23:41 +03:00
|
|
|
|
2021-04-08 03:54:39 +03:00
|
|
|
if (req->flags & REQ_F_REISSUE) {
|
|
|
|
req->flags &= ~REQ_F_REISSUE;
|
2021-04-02 05:41:15 +03:00
|
|
|
ret2 = -EAGAIN;
|
2021-04-08 03:54:39 +03:00
|
|
|
}
|
2021-04-02 05:41:15 +03:00
|
|
|
|
2020-08-01 13:50:02 +03:00
|
|
|
/*
|
|
|
|
* Raw bdev writes will return -EOPNOTSUPP for IOCB_NOWAIT. Just
|
|
|
|
* retry them without IOCB_NOWAIT.
|
|
|
|
*/
|
|
|
|
if (ret2 == -EOPNOTSUPP && (kiocb->ki_flags & IOCB_NOWAIT))
|
|
|
|
ret2 = -EAGAIN;
|
2021-02-04 16:52:05 +03:00
|
|
|
/* no retry on NONBLOCK nor RWF_NOWAIT */
|
|
|
|
if (ret2 == -EAGAIN && (req->flags & REQ_F_NOWAIT))
|
2020-09-02 18:30:31 +03:00
|
|
|
goto done;
|
2020-08-01 13:50:02 +03:00
|
|
|
if (!force_nonblock || ret2 != -EAGAIN) {
|
2020-08-28 01:40:19 +03:00
|
|
|
/* IOPOLL retry should happen for io-wq threads */
|
2021-10-17 04:32:29 +03:00
|
|
|
if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL))
|
2020-08-28 01:40:19 +03:00
|
|
|
goto copy_iov;
|
2020-09-02 18:30:31 +03:00
|
|
|
done:
|
2021-11-23 03:07:49 +03:00
|
|
|
kiocb_done(req, ret2, issue_flags);
|
2022-05-25 00:21:00 +03:00
|
|
|
ret = IOU_ISSUE_SKIP_COMPLETE;
|
2020-08-01 13:50:02 +03:00
|
|
|
} else {
|
2019-12-02 21:03:47 +03:00
|
|
|
copy_iov:
|
2021-10-14 18:10:16 +03:00
|
|
|
iov_iter_restore(&s->iter, &s->iter_state);
|
|
|
|
ret = io_setup_async_rw(req, iovec, s, false);
|
2021-02-04 16:52:01 +03:00
|
|
|
return ret ?: -EAGAIN;
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
2019-01-19 08:56:34 +03:00
|
|
|
out_free:
|
2020-08-20 11:34:10 +03:00
|
|
|
/* it's reportedly faster than delegating the null check to kfree() */
|
2020-07-13 22:59:20 +03:00
|
|
|
if (iovec)
|
2020-06-18 10:01:56 +03:00
|
|
|
kfree(iovec);
|
2019-01-07 20:46:33 +03:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2022-05-27 05:54:00 +03:00
|
|
|
/*
|
|
|
|
* Note that when io_fixed_fd_install() returns an error value, it will ensure
|
|
|
|
* fput() is called correspondingly.
|
|
|
|
*/
|
2022-05-25 06:54:43 +03:00
|
|
|
int io_fixed_fd_install(struct io_kiocb *req, unsigned int issue_flags,
|
|
|
|
struct file *file, unsigned int file_slot)
|
2022-05-07 23:18:44 +03:00
|
|
|
{
|
|
|
|
bool alloc_slot = file_slot == IORING_FILE_INDEX_ALLOC;
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
int ret;
|
|
|
|
|
2022-06-01 17:28:44 +03:00
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
|
|
|
|
2022-05-07 23:18:44 +03:00
|
|
|
if (alloc_slot) {
|
|
|
|
ret = io_file_bitmap_get(ctx);
|
2022-06-01 17:28:44 +03:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
goto err;
|
2022-05-07 23:18:44 +03:00
|
|
|
file_slot = ret;
|
|
|
|
} else {
|
|
|
|
file_slot--;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = io_install_fixed_file(req, file, issue_flags, file_slot);
|
2022-06-01 17:28:44 +03:00
|
|
|
if (!ret && alloc_slot)
|
|
|
|
ret = file_slot;
|
|
|
|
err:
|
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
|
|
|
if (unlikely(ret < 0))
|
|
|
|
fput(file);
|
2022-05-07 23:18:44 +03:00
|
|
|
return ret;
|
|
|
|
}
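As a usage illustration (a hedged sketch assuming liburing; the four-slot table, slot 0 and the read length are arbitrary choices): opening with a fixed file index goes through io_fixed_fd_install() above, and later SQEs reference the slot with IOSQE_FIXED_FILE rather than a regular fd. Passing IORING_FILE_INDEX_ALLOC instead of a concrete slot is what takes the io_file_bitmap_get() branch.

#include <fcntl.h>
#include <liburing.h>

/* open 'path' straight into fixed slot 0, then read through that slot */
static int read_via_fixed_slot(struct io_uring *ring, const char *path,
			       char *buf, unsigned int len)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	/* sparse table with 4 direct-descriptor slots */
	ret = io_uring_register_files_sparse(ring, 4);
	if (ret)
		return ret;

	/* no regular fd is installed; the file lands in slot 0 */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_openat_direct(sqe, AT_FDCWD, path, O_RDONLY, 0, 0);
	io_uring_submit(ring);
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret || cqe->res < 0)
		return ret ? ret : cqe->res;
	io_uring_cqe_seen(ring, cqe);

	/* the 'fd' argument is the slot index, flagged as fixed */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, 0, buf, len, 0);
	sqe->flags |= IOSQE_FIXED_FILE;
	io_uring_submit(ring);
	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret)
		return ret;
	ret = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	return ret;
}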
|
|
|
|
|
2020-03-03 02:32:28 +03:00
|
|
|
static int io_remove_buffers_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2022-05-24 19:03:49 +03:00
|
|
|
struct io_provide_buf *p = io_kiocb_to_cmd(req);
|
2020-03-03 02:32:28 +03:00
|
|
|
u64 tmp;
|
|
|
|
|
2022-04-26 20:34:56 +03:00
|
|
|
if (sqe->rw_flags || sqe->addr || sqe->len || sqe->off ||
|
2021-08-20 12:36:37 +03:00
|
|
|
sqe->splice_fd_in)
|
2020-03-03 02:32:28 +03:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tmp = READ_ONCE(sqe->fd);
|
|
|
|
if (!tmp || tmp > USHRT_MAX)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
memset(p, 0, sizeof(*p));
|
|
|
|
p->nbufs = tmp;
|
|
|
|
p->bgid = READ_ONCE(sqe->buf_group);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2022-03-18 02:20:10 +03:00
|
|
|
static int __io_remove_buffers(struct io_ring_ctx *ctx,
|
|
|
|
struct io_buffer_list *bl, unsigned nbufs)
|
2020-03-03 02:32:28 +03:00
|
|
|
{
|
|
|
|
unsigned i = 0;
|
|
|
|
|
|
|
|
/* shouldn't happen */
|
|
|
|
if (!nbufs)
|
|
|
|
return 0;
|
|
|
|
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at any one time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring's tail is shared
with the application, which advances it as it adds buffers, while the kernel keeps its consumption head private.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how
many buffers are provided back at a time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care whether you provide 1 or more back at
the same time. This means the application can just replenish as it goes,
rather than needing to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
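To make the commit message above concrete, here is a hedged userspace sketch (assuming liburing's buffer-ring helpers; the group id, buffer count and buffer size are arbitrary illustration values) that registers a mapped buffer ring via IORING_REGISTER_PBUF_RING and hands every buffer to the kernel in one shot:

#include <stdlib.h>
#include <string.h>
#include <liburing.h>

#define BUFS	8
#define BUF_SZ	4096
#define BGID	0

static struct io_uring_buf_ring *setup_pbuf_ring(struct io_uring *ring,
						 void *bufs[BUFS])
{
	struct io_uring_buf_ring *br;
	struct io_uring_buf_reg reg;
	int i;

	/* ring of io_uring_buf entries; the app advances the shared tail */
	if (posix_memalign((void **) &br, 4096,
			   BUFS * sizeof(struct io_uring_buf)))
		return NULL;

	memset(&reg, 0, sizeof(reg));
	reg.ring_addr = (unsigned long) br;
	reg.ring_entries = BUFS;
	reg.bgid = BGID;
	if (io_uring_register_buf_ring(ring, &reg, 0))
		return NULL;

	/* hand all buffers to the kernel; replenishing later works the same */
	io_uring_buf_ring_init(br);
	for (i = 0; i < BUFS; i++) {
		bufs[i] = malloc(BUF_SZ);
		io_uring_buf_ring_add(br, bufs[i], BUF_SZ, i,
				      io_uring_buf_ring_mask(BUFS), i);
	}
	io_uring_buf_ring_advance(br, BUFS);
	return br;
}

A read or recv SQE then selects from this pool by setting IOSQE_BUFFER_SELECT and sqe->buf_group = BGID; the buffer id the kernel picked is reported back in cqe->flags.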
|
|
|
if (bl->buf_nr_pages) {
|
|
|
|
int j;
|
|
|
|
|
|
|
|
i = bl->buf_ring->tail - bl->head;
|
|
|
|
for (j = 0; j < bl->buf_nr_pages; j++)
|
|
|
|
unpin_user_page(bl->buf_pages[j]);
|
|
|
|
kvfree(bl->buf_pages);
|
|
|
|
bl->buf_pages = NULL;
|
|
|
|
bl->buf_nr_pages = 0;
|
2022-05-18 23:36:18 +03:00
|
|
|
/* make sure it's seen as empty */
|
|
|
|
INIT_LIST_HEAD(&bl->buf_list);
|
2022-04-30 23:38:53 +03:00
|
|
|
return i;
|
|
|
|
}
|
|
|
|
|
2020-03-03 02:32:28 +03:00
|
|
|
/* the head kbuf is the list itself */
|
2022-03-18 02:20:10 +03:00
|
|
|
while (!list_empty(&bl->buf_list)) {
|
2020-03-03 02:32:28 +03:00
|
|
|
struct io_buffer *nxt;
|
|
|
|
|
2022-03-18 02:20:10 +03:00
|
|
|
nxt = list_first_entry(&bl->buf_list, struct io_buffer, list);
|
2020-03-03 02:32:28 +03:00
|
|
|
list_del(&nxt->list);
|
|
|
|
if (++i == nbufs)
|
|
|
|
return i;
|
io_uring: fix soft lockup when call __io_remove_buffers
I got an issue as follows:
[ 567.094140] __io_remove_buffers: [1]start ctx=0xffff8881067bf000 bgid=65533 buf=0xffff8881fefe1680
[ 594.360799] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [kworker/u32:5:108]
[ 594.364987] Modules linked in:
[ 594.365405] irq event stamp: 604180238
[ 594.365906] hardirqs last enabled at (604180237): [<ffffffff93fec9bd>] _raw_spin_unlock_irqrestore+0x2d/0x50
[ 594.367181] hardirqs last disabled at (604180238): [<ffffffff93fbbadb>] sysvec_apic_timer_interrupt+0xb/0xc0
[ 594.368420] softirqs last enabled at (569080666): [<ffffffff94200654>] __do_softirq+0x654/0xa9e
[ 594.369551] softirqs last disabled at (569080575): [<ffffffff913e1d6a>] irq_exit_rcu+0x1ca/0x250
[ 594.370692] CPU: 2 PID: 108 Comm: kworker/u32:5 Tainted: G L 5.15.0-next-20211112+ #88
[ 594.371891] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
[ 594.373604] Workqueue: events_unbound io_ring_exit_work
[ 594.374303] RIP: 0010:_raw_spin_unlock_irqrestore+0x33/0x50
[ 594.375037] Code: 48 83 c7 18 53 48 89 f3 48 8b 74 24 10 e8 55 f5 55 fd 48 89 ef e8 ed a7 56 fd 80 e7 02 74 06 e8 43 13 7b fd fb bf 01 00 00 00 <e8> f8 78 474
[ 594.377433] RSP: 0018:ffff888101587a70 EFLAGS: 00000202
[ 594.378120] RAX: 0000000024030f0d RBX: 0000000000000246 RCX: 1ffffffff2f09106
[ 594.379053] RDX: 0000000000000000 RSI: ffffffff9449f0e0 RDI: 0000000000000001
[ 594.379991] RBP: ffffffff9586cdc0 R08: 0000000000000001 R09: fffffbfff2effcab
[ 594.380923] R10: ffffffff977fe557 R11: fffffbfff2effcaa R12: ffff8881b8f3def0
[ 594.381858] R13: 0000000000000246 R14: ffff888153a8b070 R15: 0000000000000000
[ 594.382787] FS: 0000000000000000(0000) GS:ffff888399c00000(0000) knlGS:0000000000000000
[ 594.383851] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 594.384602] CR2: 00007fcbe71d2000 CR3: 00000000b4216000 CR4: 00000000000006e0
[ 594.385540] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 594.386474] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 594.387403] Call Trace:
[ 594.387738] <TASK>
[ 594.388042] find_and_remove_object+0x118/0x160
[ 594.389321] delete_object_full+0xc/0x20
[ 594.389852] kfree+0x193/0x470
[ 594.390275] __io_remove_buffers.part.0+0xed/0x147
[ 594.390931] io_ring_ctx_free+0x342/0x6a2
[ 594.392159] io_ring_exit_work+0x41e/0x486
[ 594.396419] process_one_work+0x906/0x15a0
[ 594.399185] worker_thread+0x8b/0xd80
[ 594.400259] kthread+0x3bf/0x4a0
[ 594.401847] ret_from_fork+0x22/0x30
[ 594.402343] </TASK>
Message from syslogd@localhost at Nov 13 09:09:54 ...
kernel:watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [kworker/u32:5:108]
[ 596.793660] __io_remove_buffers: [2099199]start ctx=0xffff8881067bf000 bgid=65533 buf=0xffff8881fefe1680
We can reproduce this issue with the following syzkaller log:
r0 = syz_io_uring_setup(0x401, &(0x7f0000000300), &(0x7f0000003000/0x2000)=nil, &(0x7f0000ff8000/0x4000)=nil, &(0x7f0000000280)=<r1=>0x0, &(0x7f0000000380)=<r2=>0x0)
sendmsg$ETHTOOL_MSG_FEATURES_SET(0xffffffffffffffff, &(0x7f0000003080)={0x0, 0x0, &(0x7f0000003040)={&(0x7f0000000040)=ANY=[], 0x18}}, 0x0)
syz_io_uring_submit(r1, r2, &(0x7f0000000240)=@IORING_OP_PROVIDE_BUFFERS={0x1f, 0x5, 0x0, 0x401, 0x1, 0x0, 0x100, 0x0, 0x1, {0xfffd}}, 0x0)
io_uring_enter(r0, 0x3a2d, 0x0, 0x0, 0x0, 0x0)
The reason for the above issue is that 'buf->list' has 2,100,000 nodes; walking
that list occupies the CPU and leads to the soft lockup.
To solve this issue, we need to add a scheduling point to the while loop in
'__io_remove_buffers'.
After adding the scheduling point we re-ran the reproducer and got the following data.
[ 240.141864] __io_remove_buffers: [1]start ctx=0xffff888170603000 bgid=65533 buf=0xffff8881116fcb00
[ 268.408260] __io_remove_buffers: [1]start ctx=0xffff8881b92d2000 bgid=65533 buf=0xffff888130c83180
[ 275.899234] __io_remove_buffers: [2099199]start ctx=0xffff888170603000 bgid=65533 buf=0xffff8881116fcb00
[ 296.741404] __io_remove_buffers: [1]start ctx=0xffff8881b659c000 bgid=65533 buf=0xffff8881010fe380
[ 305.090059] __io_remove_buffers: [2099199]start ctx=0xffff8881b92d2000 bgid=65533 buf=0xffff888130c83180
[ 325.415746] __io_remove_buffers: [1]start ctx=0xffff8881b92d1000 bgid=65533 buf=0xffff8881a17d8f00
[ 333.160318] __io_remove_buffers: [2099199]start ctx=0xffff8881b659c000 bgid=65533 buf=0xffff8881010fe380
...
Fixes: 8bab4c09f24e ("io_uring: allow conditional reschedule for intensive iterators")
Signed-off-by: Ye Bin <yebin10@huawei.com>
Link: https://lore.kernel.org/r/20211122024737.2198530-1-yebin10@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-11-22 05:47:37 +03:00
|
|
|
cond_resched();
|
2020-03-03 02:32:28 +03:00
|
|
|
}
|
|
|
|
i++;
|
|
|
|
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
|
2021-02-10 03:03:09 +03:00
|
|
|
static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
|
2020-03-03 02:32:28 +03:00
|
|
|
{
|
2022-05-24 19:03:49 +03:00
|
|
|
struct io_provide_buf *p = io_kiocb_to_cmd(req);
|
2020-03-03 02:32:28 +03:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2022-03-18 02:20:10 +03:00
|
|
|
struct io_buffer_list *bl;
|
2020-03-03 02:32:28 +03:00
|
|
|
int ret = 0;
|
|
|
|
|
2022-03-25 14:52:14 +03:00
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
2020-03-03 02:32:28 +03:00
|
|
|
|
|
|
|
ret = -ENOENT;
|
2022-03-18 02:20:10 +03:00
|
|
|
bl = io_buffer_get_list(ctx, p->bgid);
|
2022-04-30 23:38:53 +03:00
|
|
|
if (bl) {
|
|
|
|
ret = -EINVAL;
|
|
|
|
/* can't use provide/remove buffers command on mapped buffers */
|
|
|
|
if (!bl->buf_nr_pages)
|
|
|
|
ret = __io_remove_buffers(ctx, bl, p->nbufs);
|
|
|
|
}
|
2020-03-03 02:32:28 +03:00
|
|
|
if (ret < 0)
|
2021-05-17 00:58:05 +03:00
|
|
|
req_set_fail(req);
|
2020-03-03 02:32:28 +03:00
|
|
|
|
2021-03-01 01:35:13 +03:00
|
|
|
/* complete before unlock, IOPOLL may need the lock */
|
2022-05-25 00:21:00 +03:00
|
|
|
io_req_set_res(req, ret, 0);
|
|
|
|
__io_req_complete(req, issue_flags);
|
2022-03-25 14:52:14 +03:00
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
2022-05-25 00:21:00 +03:00
|
|
|
return IOU_ISSUE_SKIP_COMPLETE;
|
2020-03-03 02:32:28 +03:00
|
|
|
}
|
|
|
|
|
2020-02-24 02:41:33 +03:00
|
|
|
static int io_provide_buffers_prep(struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
|
|
|
{
|
2021-04-15 15:07:39 +03:00
|
|
|
unsigned long size, tmp_check;
|
2022-05-24 19:03:49 +03:00
|
|
|
struct io_provide_buf *p = io_kiocb_to_cmd(req);
|
2020-02-24 02:41:33 +03:00
|
|
|
u64 tmp;
|
|
|
|
|
2022-04-26 20:34:56 +03:00
|
|
|
if (sqe->rw_flags || sqe->splice_fd_in)
|
2020-02-24 02:41:33 +03:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
tmp = READ_ONCE(sqe->fd);
|
|
|
|
if (!tmp || tmp > USHRT_MAX)
|
|
|
|
return -E2BIG;
|
|
|
|
p->nbufs = tmp;
|
|
|
|
p->addr = READ_ONCE(sqe->addr);
|
|
|
|
p->len = READ_ONCE(sqe->len);
|
|
|
|
|
2021-04-15 15:07:39 +03:00
|
|
|
if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
|
|
|
|
&size))
|
|
|
|
return -EOVERFLOW;
|
|
|
|
if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
|
|
|
|
return -EOVERFLOW;
|
|
|
|
|
2021-03-19 13:21:19 +03:00
|
|
|
size = (unsigned long)p->len * p->nbufs;
|
|
|
|
if (!access_ok(u64_to_user_ptr(p->addr), size))
|
2020-02-24 02:41:33 +03:00
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
p->bgid = READ_ONCE(sqe->buf_group);
|
|
|
|
tmp = READ_ONCE(sqe->off);
|
|
|
|
if (tmp > USHRT_MAX)
|
|
|
|
return -E2BIG;
|
|
|
|
p->bid = tmp;
|
|
|
|
return 0;
|
|
|
|
}
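For reference, this prep maps one-to-one onto how userspace fills the SQE; below is a hedged sketch assuming liburing (sixteen 256-byte buffers in group 7 with a starting bid of 0 are arbitrary choices): sqe->fd carries nbufs, sqe->addr and sqe->len describe the buffer area, sqe->buf_group names the group, and sqe->off holds the first buffer id.

#include <errno.h>
#include <stdlib.h>
#include <liburing.h>

#define NR_BUFS		16
#define BUF_LEN		256
#define BGID		7

/* provide one contiguous chunk as NR_BUFS classic provided buffers */
static int provide_buffers(struct io_uring *ring, void **base)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	*base = malloc(NR_BUFS * BUF_LEN);
	if (!*base)
		return -ENOMEM;

	sqe = io_uring_get_sqe(ring);
	/* fd = nbufs, addr/len = area, buf_group = BGID, off = first bid */
	io_uring_prep_provide_buffers(sqe, *base, BUF_LEN, NR_BUFS, BGID, 0);
	io_uring_submit(ring);

	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret)
		return ret;
	ret = cqe->res;		/* 0 on success, negative errno on failure */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}

A read that wants one of these buffers sets IOSQE_BUFFER_SELECT and sqe->buf_group = BGID; the buffer id the kernel selected comes back in cqe->flags. The matching removal, handled by io_remove_buffers() above, is reached from userspace via liburing's io_uring_prep_remove_buffers().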
|
|
|
|
|
2022-03-09 03:46:52 +03:00
|
|
|
static int io_refill_buffer_cache(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
struct io_buffer *buf;
|
|
|
|
struct page *page;
|
|
|
|
int bufs_in_page;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Completions that don't happen inline (eg not under uring_lock) will
|
|
|
|
* add to ->io_buffers_comp. If we don't have any free buffers, check
|
|
|
|
* the completion list and splice those entries first.
|
|
|
|
*/
|
|
|
|
if (!list_empty_careful(&ctx->io_buffers_comp)) {
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
if (!list_empty(&ctx->io_buffers_comp)) {
|
|
|
|
list_splice_init(&ctx->io_buffers_comp,
|
|
|
|
&ctx->io_buffers_cache);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* No free buffers and no completion entries either. Allocate a new
|
|
|
|
* page worth of buffer entries and add those to our freelist.
|
|
|
|
*/
|
|
|
|
page = alloc_page(GFP_KERNEL_ACCOUNT);
|
|
|
|
if (!page)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
list_add(&page->lru, &ctx->io_buffers_pages);
|
|
|
|
|
|
|
|
buf = page_address(page);
|
|
|
|
bufs_in_page = PAGE_SIZE / sizeof(*buf);
|
|
|
|
while (bufs_in_page) {
|
|
|
|
list_add_tail(&buf->list, &ctx->io_buffers_cache);
|
|
|
|
buf++;
|
|
|
|
bufs_in_page--;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_add_buffers(struct io_ring_ctx *ctx, struct io_provide_buf *pbuf,
|
2022-03-18 02:20:10 +03:00
|
|
|
struct io_buffer_list *bl)
|
2020-02-24 02:41:33 +03:00
|
|
|
{
|
|
|
|
struct io_buffer *buf;
|
|
|
|
u64 addr = pbuf->addr;
|
|
|
|
int i, bid = pbuf->bid;
|
|
|
|
|
|
|
|
for (i = 0; i < pbuf->nbufs; i++) {
|
2022-03-09 03:46:52 +03:00
|
|
|
if (list_empty(&ctx->io_buffers_cache) &&
|
|
|
|
io_refill_buffer_cache(ctx))
|
2020-02-24 02:41:33 +03:00
|
|
|
break;
|
2022-03-09 03:46:52 +03:00
|
|
|
buf = list_first_entry(&ctx->io_buffers_cache, struct io_buffer,
|
|
|
|
list);
|
2022-03-18 02:20:10 +03:00
|
|
|
list_move_tail(&buf->list, &bl->buf_list);
|
2020-02-24 02:41:33 +03:00
|
|
|
buf->addr = addr;
|
2021-05-05 15:47:06 +03:00
|
|
|
buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
|
2020-02-24 02:41:33 +03:00
|
|
|
buf->bid = bid;
|
2022-03-09 21:27:52 +03:00
|
|
|
buf->bgid = pbuf->bgid;
|
2020-02-24 02:41:33 +03:00
|
|
|
addr += pbuf->len;
|
|
|
|
bid++;
|
2022-02-15 07:10:03 +03:00
|
|
|
cond_resched();
|
2020-02-24 02:41:33 +03:00
|
|
|
}
|
|
|
|
|
2022-03-18 02:20:10 +03:00
|
|
|
return i ? 0 : -ENOMEM;
|
2020-02-24 02:41:33 +03:00
|
|
|
}
|
|
|
|
|
2022-05-01 19:52:44 +03:00
|
|
|
static __cold int io_init_bl_list(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
ctx->io_bl = kcalloc(BGID_ARRAY, sizeof(struct io_buffer_list),
|
|
|
|
GFP_KERNEL);
|
|
|
|
if (!ctx->io_bl)
|
|
|
|
return -ENOMEM;
|
|
|
|
|
|
|
|
for (i = 0; i < BGID_ARRAY; i++) {
|
|
|
|
INIT_LIST_HEAD(&ctx->io_bl[i].buf_list);
|
|
|
|
ctx->io_bl[i].bgid = i;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 03:03:09 +03:00
|
|
|
static int io_provide_buffers(struct io_kiocb *req, unsigned int issue_flags)
|
2020-02-24 02:41:33 +03:00
|
|
|
{
|
2022-05-24 19:03:49 +03:00
|
|
|
struct io_provide_buf *p = io_kiocb_to_cmd(req);
|
2020-02-24 02:41:33 +03:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2022-03-18 02:20:10 +03:00
|
|
|
struct io_buffer_list *bl;
|
2020-02-24 02:41:33 +03:00
|
|
|
int ret = 0;
|
|
|
|
|
2022-03-25 14:52:14 +03:00
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
2020-02-24 02:41:33 +03:00
|
|
|
|
2022-05-01 19:52:44 +03:00
|
|
|
if (unlikely(p->bgid < BGID_ARRAY && !ctx->io_bl)) {
|
|
|
|
ret = io_init_bl_list(ctx);
|
|
|
|
if (ret)
|
|
|
|
goto err;
|
|
|
|
}
|
2020-02-24 02:41:33 +03:00
|
|
|
|
2022-03-18 02:20:10 +03:00
|
|
|
bl = io_buffer_get_list(ctx, p->bgid);
|
|
|
|
if (unlikely(!bl)) {
|
2022-04-30 23:38:53 +03:00
|
|
|
bl = kzalloc(sizeof(*bl), GFP_KERNEL);
|
2022-03-18 02:20:10 +03:00
|
|
|
if (!bl) {
|
|
|
|
ret = -ENOMEM;
|
|
|
|
goto err;
|
|
|
|
}
|
2022-05-18 23:36:18 +03:00
|
|
|
INIT_LIST_HEAD(&bl->buf_list);
|
2022-05-01 19:52:44 +03:00
|
|
|
ret = io_buffer_add_list(ctx, bl, p->bgid);
|
|
|
|
if (ret) {
|
|
|
|
kfree(bl);
|
|
|
|
goto err;
|
|
|
|
}
|
2020-02-24 02:41:33 +03:00
|
|
|
}
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume then just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at the time. 'Replenish' is how
many buffers are provided back at the time after they have been
consumed:
Test Replenish NOPs/sec
================================================================
No provided buffers NA ~30M
Provided buffers 32 ~16M
Provided buffers 1 ~10M
Ring buffers 32 ~27M
Ring buffers 1 ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provided 1 or more back at
the same time. This means application can just replenish as they go,
rather than need to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
|
|
|
/* can't add buffers via this command for a mapped buffer ring */
|
|
|
|
if (bl->buf_nr_pages) {
|
|
|
|
ret = -EINVAL;
|
|
|
|
goto err;
|
2020-02-24 02:41:33 +03:00
|
|
|
}
|
2022-03-18 02:20:10 +03:00
|
|
|
|
|
|
|
ret = io_add_buffers(ctx, p, bl);
|
|
|
|
err:
|
2020-02-24 02:41:33 +03:00
|
|
|
if (ret < 0)
|
2021-05-17 00:58:05 +03:00
|
|
|
req_set_fail(req);
|
2021-03-01 01:35:13 +03:00
|
|
|
/* complete before unlock, IOPOLL may need the lock */
|
2022-05-25 00:21:00 +03:00
|
|
|
io_req_set_res(req, ret, 0);
|
|
|
|
__io_req_complete(req, issue_flags);
|
2022-03-25 14:52:14 +03:00
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
2022-05-25 00:21:00 +03:00
|
|
|
return IOU_ISSUE_SKIP_COMPLETE;
|
2020-01-09 03:59:24 +03:00
|
|
|
}
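
/*
 * Illustrative userspace sketch (not part of this file): classic provided
 * buffers as consumed by io_provide_buffers() above. Assumes liburing;
 * BGID, NR_BUFS, BUF_SIZE and 'fd' are made-up example values.
 *
 *	char *bufs = malloc(NR_BUFS * BUF_SIZE);
 *	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
 *
 *	// IORING_OP_PROVIDE_BUFFERS: hand NR_BUFS buffers to group BGID
 *	io_uring_prep_provide_buffers(sqe, bufs, BUF_SIZE, NR_BUFS, BGID, 0);
 *	io_uring_submit(&ring);
 *
 *	// later: let the kernel pick a buffer from that group for a read
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_read(sqe, fd, NULL, BUF_SIZE, 0);
 *	sqe->flags |= IOSQE_BUFFER_SELECT;
 *	sqe->buf_group = BGID;
 *	io_uring_submit(&ring);
 *
 * The selected buffer id is reported back to the application in
 * cqe->flags >> IORING_CQE_BUFFER_SHIFT.
 */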

static __maybe_unused int io_eopnotsupp_prep(struct io_kiocb *kiocb,
					     const struct io_uring_sqe *sqe)
{
	return -EOPNOTSUPP;
}

static int io_files_update_prep(struct io_kiocb *req,
				const struct io_uring_sqe *sqe)
{
	struct io_rsrc_update *up = io_kiocb_to_cmd(req);

	if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
		return -EINVAL;
	if (sqe->rw_flags || sqe->splice_fd_in)
		return -EINVAL;

	up->offset = READ_ONCE(sqe->off);
	up->nr_args = READ_ONCE(sqe->len);
	if (!up->nr_args)
		return -EINVAL;
	up->arg = READ_ONCE(sqe->addr);
	return 0;
}

static int io_files_update_with_index_alloc(struct io_kiocb *req,
					    unsigned int issue_flags)
{
	struct io_rsrc_update *up = io_kiocb_to_cmd(req);
	__s32 __user *fds = u64_to_user_ptr(up->arg);
	unsigned int done;
	struct file *file;
	int ret, fd;

	if (!req->ctx->file_data)
		return -ENXIO;

	for (done = 0; done < up->nr_args; done++) {
		if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
			ret = -EFAULT;
			break;
		}

		file = fget(fd);
		if (!file) {
			ret = -EBADF;
			break;
		}
		ret = io_fixed_fd_install(req, issue_flags, file,
					  IORING_FILE_INDEX_ALLOC);
		if (ret < 0)
			break;
		if (copy_to_user(&fds[done], &ret, sizeof(ret))) {
			__io_close_fixed(req, issue_flags, ret);
			ret = -EFAULT;
			break;
		}
	}

	if (done)
		return done;
	return ret;
}

static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
{
	struct io_rsrc_update *up = io_kiocb_to_cmd(req);
	struct io_ring_ctx *ctx = req->ctx;
	struct io_uring_rsrc_update2 up2;
	int ret;

	up2.offset = up->offset;
	up2.data = up->arg;
	up2.nr = 0;
	up2.tags = 0;
	up2.resv = 0;
	up2.resv2 = 0;

	if (up->offset == IORING_FILE_INDEX_ALLOC) {
		ret = io_files_update_with_index_alloc(req, issue_flags);
	} else {
		io_ring_submit_lock(ctx, issue_flags);
		ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
						&up2, up->nr_args);
		io_ring_submit_unlock(ctx, issue_flags);
	}

	if (ret < 0)
		req_set_fail(req);
	io_req_set_res(req, ret, 0);
	return IOU_OK;
}
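
/*
 * Illustrative userspace sketch (not part of this file): updating the
 * registered file table that io_files_update() services. Assumes liburing;
 * 'ring' and 'new_fds' are made-up example variables.
 *
 *	// replace registered slots 0..3 with four new descriptors
 *	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_files_update(sqe, new_fds, 4, 0);
 *	io_uring_submit(&ring);
 *
 * Passing IORING_FILE_INDEX_ALLOC as the offset instead takes the
 * io_files_update_with_index_alloc() path above: the kernel picks free
 * slots and writes the allocated indexes back into the fd array.
 */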

static int io_req_prep_async(struct io_kiocb *req)
{
	const struct io_op_def *def = &io_op_defs[req->opcode];

	/* assign early for deferred execution for non-fixed file */
	if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
		req->file = io_file_get_normal(req, req->cqe.fd);
	if (!def->prep_async)
		return 0;
	if (WARN_ON_ONCE(req_has_async_data(req)))
		return -EFAULT;
	if (io_alloc_async_data(req))
		return -EAGAIN;

	return def->prep_async(req);
}

static u32 io_get_sequence(struct io_kiocb *req)
{
	u32 seq = req->ctx->cached_sq_head;
	struct io_kiocb *cur;

	/* need original cached_sq_head, but it was increased for each req */
	io_for_each_link(cur, req)
		seq--;
	return seq;
}
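
/*
 * Worked example (illustrative numbers): if a three-request link is picked
 * up while cached_sq_head was 7, the head has advanced to 10 by the time
 * the drain check runs. io_for_each_link() above decrements seq once per
 * request in the link (three times), recovering 7 as the sequence number
 * of the link's head.
 */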

static __cold void io_drain_req(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_defer_entry *de;
	int ret;
	u32 seq = io_get_sequence(req);

	/* Still need defer if there is pending req in defer list. */
	spin_lock(&ctx->completion_lock);
	if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
		spin_unlock(&ctx->completion_lock);
queue:
		ctx->drain_active = false;
		io_req_task_queue(req);
		return;
	}
	spin_unlock(&ctx->completion_lock);

	ret = io_req_prep_async(req);
	if (ret) {
fail:
		io_req_complete_failed(req, ret);
		return;
	}
	io_prep_async_link(req);
	de = kmalloc(sizeof(*de), GFP_KERNEL);
	if (!de) {
		ret = -ENOMEM;
		goto fail;
	}

	spin_lock(&ctx->completion_lock);
	if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
		spin_unlock(&ctx->completion_lock);
		kfree(de);
		goto queue;
	}

	trace_io_uring_defer(ctx, req, req->cqe.user_data, req->opcode);
	de->req = req;
	de->seq = seq;
	list_add_tail(&de->list, &ctx->defer_list);
	spin_unlock(&ctx->completion_lock);
}

static void io_clean_op(struct io_kiocb *req)
{
	if (req->flags & REQ_F_BUFFER_SELECTED) {
		spin_lock(&req->ctx->completion_lock);
		io_put_kbuf_comp(req);
		spin_unlock(&req->ctx->completion_lock);
	}

	if (req->flags & REQ_F_NEED_CLEANUP) {
		const struct io_op_def *def = &io_op_defs[req->opcode];

		if (def->cleanup)
			def->cleanup(req);
	}
	if ((req->flags & REQ_F_POLLED) && req->apoll) {
		kfree(req->apoll->double_poll);
		kfree(req->apoll);
		req->apoll = NULL;
	}
	if (req->flags & REQ_F_INFLIGHT) {
		struct io_uring_task *tctx = req->task->io_uring;

		atomic_dec(&tctx->inflight_tracked);
	}
	if (req->flags & REQ_F_CREDS)
		put_cred(req->creds);
	if (req->flags & REQ_F_ASYNC_DATA) {
		kfree(req->async_data);
		req->async_data = NULL;
	}
	req->flags &= ~IO_REQ_CLEAN_FLAGS;
}

static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
{
	if (req->file || !io_op_defs[req->opcode].needs_file)
		return true;

	if (req->flags & REQ_F_FIXED_FILE)
		req->file = io_file_get_fixed(req, req->cqe.fd, issue_flags);
	else
		req->file = io_file_get_normal(req, req->cqe.fd);

	return !!req->file;
}
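
/*
 * The issue-time assignment above matters for chains where a direct
 * descriptor only becomes valid once the link head completes. Illustrative
 * userspace sketch (assumes liburing; 'file_slot', 'buf' and 'buf_size'
 * are example values):
 *
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_openat_direct(sqe, AT_FDCWD, "data", O_RDONLY, 0,
 *				    file_slot);
 *	sqe->flags |= IOSQE_IO_LINK | IOSQE_CQE_SKIP_SUCCESS;
 *
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_read(sqe, file_slot, buf, buf_size, 0);
 *	sqe->flags |= IOSQE_FIXED_FILE;
 *	io_uring_submit(&ring);
 *
 * Because file lookup is deferred until the read is actually issued, it
 * picks up the slot contents installed by the open rather than whatever
 * was in that slot when the chain was submitted.
 */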

static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
{
	const struct io_op_def *def = &io_op_defs[req->opcode];
	const struct cred *creds = NULL;
	int ret;

	if (unlikely(!io_assign_file(req, issue_flags)))
		return -EBADF;

	if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
		creds = override_creds(req->creds);

	if (!def->audit_skip)
		audit_uring_entry(req->opcode);

	ret = def->issue(req, issue_flags);

	if (!def->audit_skip)
		audit_uring_exit(!ret, ret);

	if (creds)
		revert_creds(creds);

	if (ret == IOU_OK)
		__io_req_complete(req, issue_flags);
	else if (ret != IOU_ISSUE_SKIP_COMPLETE)
		return ret;

	/* If the op doesn't have a file, we're not polling for it */
	if ((req->ctx->flags & IORING_SETUP_IOPOLL) && req->file)
		io_iopoll_req_issued(req, issue_flags);

	return 0;
}
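
/*
 * Return-value contract expected from ->issue() above, as seen in the
 * handlers in this file: either fill in the result with io_req_set_res()
 * and return IOU_OK so io_issue_sqe() posts the completion, or post the
 * completion yourself and return IOU_ISSUE_SKIP_COMPLETE (as
 * io_provide_buffers() does), or return -EAGAIN/an error for the caller
 * to handle. A minimal hypothetical handler following that contract:
 *
 *	static int io_nop_issue(struct io_kiocb *req, unsigned int issue_flags)
 *	{
 *		io_req_set_res(req, 0, 0);
 *		return IOU_OK;
 *	}
 */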

int io_poll_issue(struct io_kiocb *req, bool *locked)
{
	io_tw_lock(req->ctx, locked);
	if (unlikely(req->task->flags & PF_EXITING))
		return -EFAULT;
	return io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
}

struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
{
	struct io_kiocb *req = container_of(work, struct io_kiocb, work);

	req = io_put_req_find_next(req);
	return req ? &req->work : NULL;
}

void io_wq_submit_work(struct io_wq_work *work)
{
	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
	const struct io_op_def *def = &io_op_defs[req->opcode];
	unsigned int issue_flags = IO_URING_F_UNLOCKED;
	bool needs_poll = false;
	int ret = 0, err = -ECANCELED;

	/* one will be dropped by ->io_free_work() after returning to io-wq */
	if (!(req->flags & REQ_F_REFCOUNT))
		__io_req_set_refcount(req, 2);
	else
		req_ref_get(req);

	io_arm_ltimeout(req);

	/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
	if (work->flags & IO_WQ_WORK_CANCEL) {
fail:
		io_req_task_queue_fail(req, err);
		return;
	}
	if (!io_assign_file(req, issue_flags)) {
		err = -EBADF;
		work->flags |= IO_WQ_WORK_CANCEL;
		goto fail;
	}

	if (req->flags & REQ_F_FORCE_ASYNC) {
		bool opcode_poll = def->pollin || def->pollout;

		if (opcode_poll && file_can_poll(req->file)) {
			needs_poll = true;
			issue_flags |= IO_URING_F_NONBLOCK;
		}
	}

	do {
		ret = io_issue_sqe(req, issue_flags);
		if (ret != -EAGAIN)
			break;
		/*
		 * We can get EAGAIN for iopolled IO even though we're
		 * forcing a sync submission from here, since we can't
		 * wait for request slots on the block side.
		 */
		if (!needs_poll) {
			if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
				break;
			cond_resched();
			continue;
		}

		if (io_arm_poll_handler(req, issue_flags) == IO_APOLL_OK)
			return;
		/* aborted or ready, in either case retry blocking */
		needs_poll = false;
		issue_flags &= ~IO_URING_F_NONBLOCK;
	} while (1);

	/* avoid locking problems by failing it from a clean context */
	if (ret < 0)
		io_req_task_queue_fail(req, ret);
}
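
/*
 * Illustrative note: a request reaches this io-wq path either because an
 * inline issue returned -EAGAIN or because userspace forced it there with
 * IOSQE_ASYNC, e.g. (liburing sketch, example values only):
 *
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_recv(sqe, sockfd, buf, sizeof(buf), 0);
 *	sqe->flags |= IOSQE_ASYNC;
 *
 * For pollable files, the REQ_F_FORCE_ASYNC branch above first attempts a
 * non-blocking issue and arms poll on -EAGAIN rather than parking the
 * worker in a blocking call.
 */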

inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
				      unsigned int issue_flags)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct file *file = NULL;
	unsigned long file_ptr;

	io_ring_submit_lock(ctx, issue_flags);

	if (unlikely((unsigned int)fd >= ctx->nr_user_files))
		goto out;
	fd = array_index_nospec(fd, ctx->nr_user_files);
	file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
	file = (struct file *) (file_ptr & FFS_MASK);
	file_ptr &= ~FFS_MASK;
	/* mask in overlapping REQ_F and FFS bits */
	req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
	io_req_set_rsrc_node(req, ctx, 0);
	WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap));
out:
	io_ring_submit_unlock(ctx, issue_flags);
	return file;
}

struct file *io_file_get_normal(struct io_kiocb *req, int fd)
{
	struct file *file = fget(fd);

	trace_io_uring_file_get(req->ctx, req, req->cqe.user_data, fd);

	/* we don't allow fixed io_uring files */
	if (file && io_is_uring_fops(file))
		io_req_track_inflight(req);
	return file;
}
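
/*
 * Illustrative userspace sketch (assumes liburing): the two lookup helpers
 * above correspond to registered vs. regular descriptors. Example values
 * only:
 *
 *	int fds[2] = { open("a", O_RDONLY), open("b", O_RDONLY) };
 *
 *	io_uring_register_files(&ring, fds, 2);
 *
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_read(sqe, 1, buf, sizeof(buf), 0);
 *	sqe->flags |= IOSQE_FIXED_FILE;	// fd is an index: io_file_get_fixed()
 *
 *	sqe = io_uring_get_sqe(&ring);
 *	io_uring_prep_read(sqe, fds[0], buf, sizeof(buf), 0);
 *					// real fd: io_file_get_normal()
 */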

static void io_queue_async(struct io_kiocb *req, int ret)
	__must_hold(&req->ctx->uring_lock)
{
	struct io_kiocb *linked_timeout;

	if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
		io_req_complete_failed(req, ret);
		return;
	}

	linked_timeout = io_prep_linked_timeout(req);

	switch (io_arm_poll_handler(req, 0)) {
	case IO_APOLL_READY:
		io_req_task_queue(req);
		break;
	case IO_APOLL_ABORTED:
		/*
		 * Queued up for async execution, worker will release
		 * submit reference when the iocb is actually submitted.
		 */
		io_kbuf_recycle(req, 0);
		io_queue_iowq(req, NULL);
		break;
	case IO_APOLL_OK:
		break;
	}

	if (linked_timeout)
		io_queue_linked_timeout(linked_timeout);
}

static inline void io_queue_sqe(struct io_kiocb *req)
	__must_hold(&req->ctx->uring_lock)
{
	int ret;

	ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);

	if (req->flags & REQ_F_COMPLETE_INLINE) {
		io_req_add_compl_list(req);
		return;
	}
	/*
	 * We async punt it if the file wasn't marked NOWAIT, or if the file
	 * doesn't support non-blocking read/write attempts
	 */
	if (likely(!ret))
		io_arm_ltimeout(req);
	else
		io_queue_async(req, ret);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2021-09-24 23:59:58 +03:00
|
|
|
static void io_queue_sqe_fallback(struct io_kiocb *req)
	__must_hold(&req->ctx->uring_lock)
{
	if (unlikely(req->flags & REQ_F_FAIL)) {
		/*
		 * We don't submit, fail them all, for that replace hardlinks
		 * with normal links. Extra REQ_F_LINK is tolerated.
		 */
		req->flags &= ~REQ_F_HARDLINK;
		req->flags |= REQ_F_LINK;
		io_req_complete_failed(req, req->cqe.res);
	} else if (unlikely(req->ctx->drain_active)) {
		io_drain_req(req);
	} else {
		int ret = io_req_prep_async(req);

		if (unlikely(ret))
			io_req_complete_failed(req, ret);
		else
			io_queue_iowq(req, NULL);
	}
}

/*
 * Check SQE restrictions (opcode and flags).
 *
 * Returns 'true' if SQE is allowed, 'false' otherwise.
 */
static inline bool io_check_restriction(struct io_ring_ctx *ctx,
					struct io_kiocb *req,
					unsigned int sqe_flags)
{
	if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
		return false;

	if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
	    ctx->restrictions.sqe_flags_required)
		return false;

	if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
			  ctx->restrictions.sqe_flags_required))
		return false;

	return true;
}

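The restriction bits that io_check_restriction() tests are registered by the
application before the ring is enabled. The following is a minimal userspace
sketch, not part of this file, assuming a kernel that supports
IORING_REGISTER_RESTRICTIONS, a ring created with IORING_SETUP_R_DISABLED,
and libc headers that expose __NR_io_uring_register; the helper name
restrict_ring() is the sketch's own.

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* allow only readv/writev SQEs, and require fixed files on every SQE */
static int restrict_ring(int ring_fd)
{
	struct io_uring_restriction res[3] = {
		{ .opcode = IORING_RESTRICTION_SQE_OP,
		  .sqe_op = IORING_OP_READV },
		{ .opcode = IORING_RESTRICTION_SQE_OP,
		  .sqe_op = IORING_OP_WRITEV },
		{ .opcode = IORING_RESTRICTION_SQE_FLAGS_REQUIRED,
		  .sqe_flags = IOSQE_FIXED_FILE },
	};

	if (syscall(__NR_io_uring_register, ring_fd,
		    IORING_REGISTER_RESTRICTIONS, res, 3) < 0)
		return -1;
	/* allow submissions once the restrictions are locked in */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_ENABLE_RINGS, NULL, 0);
}

A typical use is a privileged parent locking a ring down before handing it to
a sandboxed child; once the ring is enabled, any SQE outside the allowed set
is rejected with -EACCES through the check above.
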
static void io_init_req_drain(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_kiocb *head = ctx->submit_state.link.head;

	ctx->drain_active = true;
	if (head) {
		/*
		 * If we need to drain a request in the middle of a link, drain
		 * the head request and the next request/link after the current
		 * link. Considering sequential execution of links,
		 * REQ_F_IO_DRAIN will be maintained for every request of our
		 * link.
		 */
		head->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
		ctx->drain_next = true;
	}
}

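From userspace, the drain behaviour that io_init_req_drain() sets up is
requested simply by putting IOSQE_IO_DRAIN on an SQE. A hedged liburing
sketch (liburing is assumed to be available; the helper name is invented and
error handling on io_uring_get_sqe() is elided): two writes are queued, and
the fsync is not started until everything submitted before it has completed.

#include <liburing.h>

static int write_then_drained_fsync(struct io_uring *ring, int fd,
				    const char *a, const char *b, unsigned len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_write(sqe, fd, a, len, 0);

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_write(sqe, fd, b, len, len);

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_fsync(sqe, fd, 0);
	sqe->flags |= IOSQE_IO_DRAIN;	/* barrier against all prior SQEs */

	return io_uring_submit(ring);
}
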
static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
		       const struct io_uring_sqe *sqe)
	__must_hold(&ctx->uring_lock)
{
	const struct io_op_def *def;
	unsigned int sqe_flags;
	int personality;
	u8 opcode;

	/* req is partially pre-initialised, see io_preinit_req() */
	req->opcode = opcode = READ_ONCE(sqe->opcode);
	/* same numerical values with corresponding REQ_F_*, safe to copy */
	req->flags = sqe_flags = READ_ONCE(sqe->flags);
	req->cqe.user_data = READ_ONCE(sqe->user_data);
	req->file = NULL;
	req->rsrc_node = NULL;
	req->task = current;

	if (unlikely(opcode >= IORING_OP_LAST)) {
		req->opcode = 0;
		return -EINVAL;
	}
	def = &io_op_defs[opcode];
	if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
		/* enforce forwards compatibility on users */
		if (sqe_flags & ~SQE_VALID_FLAGS)
			return -EINVAL;
		if (sqe_flags & IOSQE_BUFFER_SELECT) {
			if (!def->buffer_select)
				return -EOPNOTSUPP;
			req->buf_index = READ_ONCE(sqe->buf_group);
		}
		if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
			ctx->drain_disabled = true;
		if (sqe_flags & IOSQE_IO_DRAIN) {
			if (ctx->drain_disabled)
				return -EOPNOTSUPP;
			io_init_req_drain(req);
		}
	}
	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
		if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
			return -EACCES;
		/* knock it to the slow queue path, will be drained there */
		if (ctx->drain_active)
			req->flags |= REQ_F_FORCE_ASYNC;
		/* if there is no link, we're at "next" request and need to drain */
		if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
			ctx->drain_next = false;
			ctx->drain_active = true;
			req->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
		}
	}

	if (!def->ioprio && sqe->ioprio)
		return -EINVAL;
	if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;

	if (def->needs_file) {
		struct io_submit_state *state = &ctx->submit_state;

		req->cqe.fd = READ_ONCE(sqe->fd);
io_uring: defer file assignment
If an application uses direct open or accept, it knows in advance what
direct descriptor value it will get, as it picks it itself. This allows
combined requests such as:
sqe = io_uring_get_sqe(ring);
io_uring_prep_openat_direct(sqe, ..., file_slot);
sqe->flags |= IOSQE_IO_LINK | IOSQE_CQE_SKIP_SUCCESS;
sqe = io_uring_get_sqe(ring);
io_uring_prep_read(sqe, file_slot, buf, buf_size, 0);
sqe->flags |= IOSQE_FIXED_FILE;
io_uring_submit(ring);
where we prepare both a file open and a read, and only get a completion
event for the read when both have completed successfully.
Currently links are fully prepared before the head is issued, but that
fails if the dependent link needs a file assigned that isn't valid until
the head has completed.
Conversely, if the same chain is performed but the fixed file slot is
already valid, then we would be unexpectedly returning data from the
old file slot rather than the newly opened one. Make sure we're
consistent here.
Allow deferral of file setup, which makes this documented case work.
Cc: stable@vger.kernel.org # v5.15+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
		/*
		 * Plug now if we have more than 2 IO left after this, and the
		 * target is potentially a read/write to block based storage.
		 */
		if (state->need_plug && def->plug) {
			state->plug_started = true;
			state->need_plug = false;
			blk_start_plug_nr_ios(&state->plug, state->submit_nr);
		}
	}

	personality = READ_ONCE(sqe->personality);
	if (personality) {
		int ret;

		req->creds = xa_load(&ctx->personalities, personality);
		if (!req->creds)
			return -EINVAL;
		get_cred(req->creds);
lsm,io_uring: add LSM hooks to io_uring
A full explanation of io_uring is beyond the scope of this commit
description, but in summary it is an asynchronous I/O mechanism
which allows for I/O requests and the resulting data to be queued
in memory mapped "rings" which are shared between the kernel and
userspace. Optionally, io_uring offers the ability for applications
to spawn kernel threads to dequeue I/O requests from the ring and
submit the requests in the kernel, helping to minimize the syscall
overhead. Rings are accessed in userspace by memory mapping a file
descriptor provided by io_uring_setup(2), and can be shared
between applications as one might do with any open file descriptor.
Finally, process credentials can be registered with a given ring
and any process with access to that ring can submit I/O requests
using any of the registered credentials.
While the io_uring functionality is widely recognized as offering a
vastly improved and high-performing asynchronous I/O mechanism, its
ability to allow processes to submit I/O requests with credentials
other than its own presents a challenge to LSMs. When a process
creates a new io_uring ring the ring's credentials are inherited
from the calling process; if this ring is shared with another
process operating with different credentials there is the potential
to bypass the LSM's security policy. Similarly, registering
credentials with a given ring allows any process with access to that
ring to submit I/O requests with those credentials.
In an effort to allow LSMs to apply security policy to io_uring I/O
operations, this patch adds two new LSM hooks. These hooks, in
conjunction with the LSM anonymous inode support previously
submitted, allow an LSM to apply access control policy to the
sharing of io_uring rings as well as any io_uring credential changes
requested by a process.
The new LSM hooks are described below:
* int security_uring_override_creds(cred)
Controls if the current task, executing an io_uring operation,
is allowed to override its credentials with @cred. In cases
where the current task is a user application, the current
credentials will be those of the user application. In cases
where the current task is a kernel thread servicing io_uring
requests the current credentials will be those of the io_uring
ring (inherited from the process that created the ring).
* int security_uring_sqpoll(void)
Controls if the current task is allowed to create an io_uring
polling thread (IORING_SETUP_SQPOLL). Without a SQPOLL thread
in the kernel, processes must submit I/O requests via
io_uring_enter(2), which allows us to compare any requested
credential changes against the application making the request.
With a SQPOLL thread, we can no longer compare requested
credential changes against the application making the request;
the comparison is made against the ring's credentials.
Signed-off-by: Paul Moore <paul@paul-moore.com>
		ret = security_uring_override_creds(req->creds);
		if (ret) {
			put_cred(req->creds);
			return ret;
		}
		req->flags |= REQ_F_CREDS;
	}

	return def->prep(req, sqe);
}

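The personality lookup and the security_uring_override_creds() call at the
end of io_init_req() are driven from userspace by registering credentials
ahead of time and tagging individual SQEs with the returned id. A hedged
liburing sketch, with the helper name invented for illustration:

#include <liburing.h>

static int read_with_personality(struct io_uring *ring, int fd,
				 void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;
	int id;

	id = io_uring_register_personality(ring);	/* snapshot current creds */
	if (id < 0)
		return id;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, len, 0);
	sqe->personality = id;	/* issue this request with the registered creds */

	return io_uring_submit(ring);
}

The registered credentials are the ones the LSM hook above gets a chance to
veto when the request is initialised.
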
static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
				      struct io_kiocb *req, int ret)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_submit_link *link = &ctx->submit_state.link;
	struct io_kiocb *head = link->head;

	trace_io_uring_req_failed(sqe, ctx, req, ret);

	/*
	 * Avoid breaking links in the middle as it renders links with SQPOLL
	 * unusable. Instead of failing eagerly, continue assembling the link if
	 * applicable and mark the head with REQ_F_FAIL. The link flushing code
	 * should find the flag and handle the rest.
	 */
	req_fail_link_node(req, ret);
	if (head && !(head->flags & REQ_F_FAIL))
		req_fail_link_node(head, -ECANCELED);

	if (!(req->flags & IO_REQ_LINK_FLAGS)) {
		if (head) {
			link->last->link = req;
			link->head = NULL;
			req = head;
		}
		io_queue_sqe_fallback(req);
		return ret;
	}

	if (head)
		link->last->link = req;
	else
		link->head = req;
	link->last = req;
	return 0;
}

static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
				const struct io_uring_sqe *sqe)
	__must_hold(&ctx->uring_lock)
{
	struct io_submit_link *link = &ctx->submit_state.link;
	int ret;

	ret = io_init_req(ctx, req, sqe);
	if (unlikely(ret))
		return io_submit_fail_init(sqe, req, ret);

	/* don't need @sqe from now on */
	trace_io_uring_submit_sqe(ctx, req, req->cqe.user_data, req->opcode,
				  req->flags, true,
				  ctx->flags & IORING_SETUP_SQPOLL);

	/*
	 * If we already have a head request, queue this one for async
	 * submittal once the head completes. If we don't have a head but
	 * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
	 * submitted sync once the chain is complete. If none of those
	 * conditions are true (normal request), then just queue it.
	 */
	if (unlikely(link->head)) {
		ret = io_req_prep_async(req);
		if (unlikely(ret))
			return io_submit_fail_init(sqe, req, ret);

		trace_io_uring_link(ctx, req, link->head);
		link->last->link = req;
		link->last = req;

		if (req->flags & IO_REQ_LINK_FLAGS)
			return 0;
		/* last request of the link, flush it */
		req = link->head;
		link->head = NULL;
		if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
			goto fallback;

	} else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
					  REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
		if (req->flags & IO_REQ_LINK_FLAGS) {
			link->head = req;
			link->last = req;
		} else {
fallback:
			io_queue_sqe_fallback(req);
		}
		return 0;
	}

	io_queue_sqe(req);
	return 0;
}

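The link->head/link->last bookkeeping in io_submit_sqe() is what gives
IOSQE_IO_LINK its ordering semantics from the application's point of view.
A hedged liburing sketch (helper name invented, error handling elided): a
poll is linked ahead of a recv, so the recv is only issued once the socket
is readable, and completes with -ECANCELED if the poll fails.

#include <liburing.h>
#include <poll.h>

static int poll_then_recv(struct io_uring *ring, int sockfd,
			  void *buf, unsigned len)
{
	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_poll_add(sqe, sockfd, POLLIN);
	sqe->flags |= IOSQE_IO_LINK;	/* the recv below depends on this poll */

	sqe = io_uring_get_sqe(ring);
	io_uring_prep_recv(sqe, sockfd, buf, len, 0);

	return io_uring_submit(ring);
}
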
/*
 * Batched submission is done, ensure local IO is flushed out.
 */
static void io_submit_state_end(struct io_ring_ctx *ctx)
{
	struct io_submit_state *state = &ctx->submit_state;

	if (unlikely(state->link.head))
		io_queue_sqe_fallback(state->link.head);
	/* flush only after queuing links as they can generate completions */
	io_submit_flush_completions(ctx);
	if (state->plug_started)
		blk_finish_plug(&state->plug);
}

/*
 * Start submission side cache.
 */
static void io_submit_state_start(struct io_submit_state *state,
				  unsigned int max_ios)
{
	state->plug_started = false;
	state->need_plug = max_ios > 2;
	state->submit_nr = max_ios;
	/* set only head, no need to init link_last in advance */
	state->link.head = NULL;
}

static void io_commit_sqring(struct io_ring_ctx *ctx)
{
	struct io_rings *rings = ctx->rings;

	/*
	 * Ensure any loads from the SQEs are done at this point,
	 * since once we write the new head, the application could
	 * write new data to them.
	 */
	smp_store_release(&rings->sq.head, ctx->cached_sq_head);
}

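io_commit_sqring() is the kernel half of the SQ ring hand-off: it publishes
the consumed head with a release store once the SQEs have been read. The
application half is sketched below, assuming the ring fields have already
been mmap()'d and resolved through struct io_sqring_offsets; the struct and
helper names are the sketch's own, and GCC/Clang __atomic builtins stand in
for the kernel's smp_load_acquire()/smp_store_release().

#include <linux/io_uring.h>

struct app_sq {
	unsigned *khead;	/* advanced by io_commit_sqring() above */
	unsigned *ktail;	/* advanced by the application */
	unsigned *kring_mask;
	unsigned *array;	/* indices into the mmap()'d sqe array */
};

/* publish one already-filled sqe slot to the kernel */
static int app_sq_push(struct app_sq *sq, unsigned sqe_index)
{
	unsigned tail = *sq->ktail;
	unsigned head = __atomic_load_n(sq->khead, __ATOMIC_ACQUIRE);

	if (tail - head > *sq->kring_mask)
		return -1;			/* ring is full */

	sq->array[tail & *sq->kring_mask] = sqe_index;
	/* order the sqe and array stores before the new tail becomes visible */
	__atomic_store_n(sq->ktail, tail + 1, __ATOMIC_RELEASE);
	return 0;
}
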
/*
 * Fetch an sqe, if one is available. Note this returns a pointer to memory
 * that is mapped by userspace. This means that care needs to be taken to
 * ensure that reads are stable, as we cannot rely on userspace always
 * being a good citizen. If members of the sqe are validated and then later
 * used, it's important that those reads are done through READ_ONCE() to
 * prevent a re-load down the line.
 */
static const struct io_uring_sqe *io_get_sqe(struct io_ring_ctx *ctx)
{
	unsigned head, mask = ctx->sq_entries - 1;
	unsigned sq_idx = ctx->cached_sq_head++ & mask;

	/*
	 * The cached sq head (or cq tail) serves two purposes:
	 *
	 * 1) allows us to batch the cost of updating the user visible
	 *    head updates.
	 * 2) allows the kernel side to track the head on its own, even
	 *    though the application is the one updating it.
	 */
	head = READ_ONCE(ctx->sq_array[sq_idx]);
	if (likely(head < ctx->sq_entries)) {
		/* double index for 128-byte SQEs, twice as long */
		if (ctx->flags & IORING_SETUP_SQE128)
			head <<= 1;
		return &ctx->sq_sqes[head];
	}

	/* drop invalid entries */
	ctx->cq_extra--;
	WRITE_ONCE(ctx->rings->sq_dropped,
		   READ_ONCE(ctx->rings->sq_dropped) + 1);
	return NULL;
}

io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
	__must_hold(&ctx->uring_lock)
{
	unsigned int entries = io_sqring_entries(ctx);
	unsigned int left;
	int ret;

	if (unlikely(!entries))
		return 0;
	/* make sure SQ entry isn't read before tail */
	ret = left = min3(nr, ctx->sq_entries, entries);
	io_get_task_refs(left);
	io_submit_state_start(&ctx->submit_state, left);

	do {
		const struct io_uring_sqe *sqe;
		struct io_kiocb *req;

		if (unlikely(!io_alloc_req_refill(ctx)))
			break;
		req = io_alloc_req(ctx);
		sqe = io_get_sqe(ctx);
		if (unlikely(!sqe)) {
			io_req_add_to_cache(req, ctx);
			break;
		}

		/*
		 * Continue submitting even for sqe failure if the
		 * ring was setup with IORING_SETUP_SUBMIT_ALL
		 */
		if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
		    !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
			left--;
			break;
		}
	} while (--left);

	if (unlikely(left)) {
		ret -= left;
		/* try again if it submitted nothing and can't allocate a req */
		if (!ret && io_req_cache_empty(ctx))
			ret = -EAGAIN;
		current->io_uring->cached_refs += left;
	}

	io_submit_state_end(ctx);
	/* Commit SQ ring head once we've consumed and submitted all SQEs */
	io_commit_sqring(ctx);
	return ret;
}

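With IORING_SETUP_SQPOLL, io_submit_sqes() is normally driven by the
kernel-side polling thread, and the application only needs io_uring_enter(2)
once that thread has gone idle and set IORING_SQ_NEED_WAKEUP, as the commit
message above describes. A hedged raw-syscall sketch of that guard; sq_flags
is assumed to point at the mmap()'d SQ ring flags word
(struct io_sqring_offsets::flags), and the helper name is invented.

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static void sqpoll_wakeup_if_needed(int ring_fd, unsigned *sq_flags)
{
	/* pairs with the kernel's store of IORING_SQ_NEED_WAKEUP */
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
	/* otherwise the SQPOLL thread picks up the new tail on its own */
}
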
struct io_wait_queue {
	struct wait_queue_entry wq;
	struct io_ring_ctx *ctx;
	unsigned cq_tail;
	unsigned nr_timeouts;
};

static inline bool io_should_wake(struct io_wait_queue *iowq)
{
	struct io_ring_ctx *ctx = iowq->ctx;
	int dist = ctx->cached_cq_tail - (int) iowq->cq_tail;

	/*
	 * Wake up if we have enough events, or if a timeout occurred since we
	 * started waiting. For timeouts, we always want to return to userspace,
	 * regardless of event count.
	 */
	return dist >= 0 || atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
}

static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
			    int wake_flags, void *key)
{
	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
						  wq);

	/*
	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
	 * the task, and the next invocation will do it.
	 */
	if (io_should_wake(iowq) ||
	    test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &iowq->ctx->check_cq))
		return autoremove_wake_function(curr, mode, wake_flags, key);
	return -1;
}

static int io_run_task_work_sig(void)
{
	if (io_run_task_work())
		return 1;
	if (test_thread_flag(TIF_NOTIFY_SIGNAL))
		return -ERESTARTSYS;
	if (task_sigpending(current))
		return -EINTR;
	return 0;
}

/* when returns >0, the caller should retry */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
					  struct io_wait_queue *iowq,
					  ktime_t timeout)
{
	int ret;
	unsigned long check_cq;

	/* make sure we run task_work before checking for signals */
	ret = io_run_task_work_sig();
	if (ret || io_should_wake(iowq))
		return ret;
	check_cq = READ_ONCE(ctx->check_cq);
	/* let the caller flush overflows, retry */
	if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
		return 1;
	if (unlikely(check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)))
		return -EBADR;
	if (!schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS))
		return -ETIME;
	return 1;
}

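io_cqring_wait_schedule() is the kernel side of waiting for completions; the
matching application side is to ask for events and then walk the CQ ring. A
hedged liburing sketch (helper name invented, per-request dispatch on
user_data left out): block for at least one completion, then drain whatever
else is already sitting in the ring without re-entering the kernel.

#include <liburing.h>

static int reap_completions(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	int reaped = 0;

	if (io_uring_wait_cqe(ring, &cqe))	/* sleeps via io_cqring_wait() */
		return -1;
	do {
		int res = cqe->res;	/* result; user_data identifies the sqe */

		io_uring_cqe_seen(ring, cqe);	/* advances the CQ head */
		if (res < 0)
			return res;
		reaped++;
	} while (io_uring_peek_cqe(ring, &cqe) == 0);

	return reaped;
}
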
/*
 * Wait until events become available, if we don't already have some. The
 * application must reap them itself, as they reside on the shared cq ring.
 */
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
			  const sigset_t __user *sig, size_t sigsz,
			  struct __kernel_timespec __user *uts)
{
	struct io_wait_queue iowq;
	struct io_rings *rings = ctx->rings;
	ktime_t timeout = KTIME_MAX;
	int ret;

Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
|
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special case as well, to avoid stack blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 19:52:41 +03:00
|
|
|
do {
|
2021-08-09 22:18:12 +03:00
|
|
|
io_cqring_overflow_flush(ctx);
|
2021-01-04 23:36:36 +03:00
|
|
|
if (io_cqring_events(ctx) >= min_events)
|
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special case as well, to avoid stack blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 19:52:41 +03:00
|
|
|
return 0;
|
2020-07-01 20:29:10 +03:00
|
|
|
if (!io_run_task_work())
|
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special case as well, to avoid stack blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 19:52:41 +03:00
|
|
|
break;
|
|
|
|
} while (1);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
|
|
|
|
if (sig) {
|
2019-03-25 17:34:53 +03:00
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (in_compat_syscall())
|
|
|
|
ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
|
2019-07-17 02:29:53 +03:00
|
|
|
sigsz);
|
2019-03-25 17:34:53 +03:00
|
|
|
else
|
|
|
|
#endif
|
2019-07-17 02:29:53 +03:00
|
|
|
ret = set_user_sigmask(sig, sigsz);
|
2019-03-25 17:34:53 +03:00
|
|
|
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2022-03-09 01:17:21 +03:00
|
|
|
if (uts) {
|
|
|
|
struct timespec64 ts;
|
|
|
|
|
|
|
|
if (get_timespec64(&ts, uts))
|
|
|
|
return -EFAULT;
|
|
|
|
timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
|
|
|
|
}
|
|
|
|
|
2021-08-09 18:07:32 +03:00
|
|
|
init_waitqueue_func_entry(&iowq.wq, io_wake_function);
|
|
|
|
iowq.wq.private = current;
|
|
|
|
INIT_LIST_HEAD(&iowq.wq.entry);
|
|
|
|
iowq.ctx = ctx;
|
2019-09-24 22:47:15 +03:00
|
|
|
iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
|
2021-08-06 23:04:31 +03:00
|
|
|
iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
|
2021-08-09 18:07:32 +03:00
|
|
|
|
io_uring: add set of tracing events
To trace io_uring activity one can get an information from workqueue and
io trace events, but looks like some parts could be hard to identify via
this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those, that are helping to understand correctness (from both kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQE. Proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those, that provide performance related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
|
|
|
trace_io_uring_cqring_wait(ctx, min_events);
|
2019-09-24 22:47:15 +03:00
|
|
|
do {
|
2021-03-05 03:15:48 +03:00
|
|
|
/* if we can't even flush overflow, don't wait for more */
|
2021-08-09 22:18:12 +03:00
|
|
|
if (!io_cqring_overflow_flush(ctx)) {
|
2021-03-05 03:15:48 +03:00
|
|
|
ret = -EBUSY;
|
|
|
|
break;
|
|
|
|
}
|
2021-06-15 01:37:28 +03:00
|
|
|
prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
|
2019-09-24 22:47:15 +03:00
|
|
|
TASK_INTERRUPTIBLE);
|
2022-02-21 15:49:30 +03:00
|
|
|
ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
|
2021-03-05 03:15:48 +03:00
|
|
|
cond_resched();
|
2021-02-04 16:51:58 +03:00
|
|
|
} while (ret > 0);
|
2019-09-24 22:47:15 +03:00
|
|
|
|
2022-03-26 01:39:57 +03:00
|
|
|
finish_wait(&ctx->cq_wait, &iowq.wq);
|
2020-07-04 17:55:50 +03:00
|
|
|
restore_saved_sigmask_unless(ret == -EINTR);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
|
2019-08-26 20:23:46 +03:00
|
|
|
return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
}
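
io_cqring_wait() above is the kernel side of waiting for completions; it is reached when a caller of io_uring_enter(2) asks for events. Below is a minimal userspace sketch of driving that path with the raw syscall; it is illustrative only, assumes the ring fd was already set up, and takes the constants from the public UAPI headers. The application still reaps the CQEs itself from the mapped CQ ring, as the comment above notes.

/* Illustrative sketch only, not part of the kernel source. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Block until at least 'min_complete' CQEs are available on 'ring_fd'. */
static int wait_for_cqes(int ring_fd, unsigned int min_complete)
{
	/* to_submit == 0: submit nothing, only wait for completions */
	return syscall(__NR_io_uring_enter, ring_fd, 0, min_complete,
		       IORING_ENTER_GETEVENTS, NULL, 0);
}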

static void io_free_page_table(void **table, size_t size)
{
	unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);

	for (i = 0; i < nr_tables; i++)
		kfree(table[i]);
	kfree(table);
}

static __cold void **io_alloc_page_table(size_t size)
{
	unsigned i, nr_tables = DIV_ROUND_UP(size, PAGE_SIZE);
	size_t init_size = size;
	void **table;

	table = kcalloc(nr_tables, sizeof(*table), GFP_KERNEL_ACCOUNT);
	if (!table)
		return NULL;

	for (i = 0; i < nr_tables; i++) {
		unsigned int this_size = min_t(size_t, size, PAGE_SIZE);

		table[i] = kzalloc(this_size, GFP_KERNEL_ACCOUNT);
		if (!table[i]) {
			io_free_page_table(table, init_size);
			return NULL;
		}
		size -= this_size;
	}
	return table;
}
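
io_alloc_page_table() splits one logical array into at most page-sized chunks, so a large resource registration never needs a high-order allocation. A worked example of the arithmetic, assuming 4 KiB pages:

/*
 * Worked example (assumes PAGE_SIZE == 4096): a table for 10000 u64 tags
 * needs 10000 * 8 = 80000 bytes, so nr_tables = DIV_ROUND_UP(80000, 4096)
 * = 20. The first 19 chunks are full pages holding 512 tags each, and the
 * last chunk is 80000 - 19 * 4096 = 2176 bytes for the remaining 272 tags.
 */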

static void io_rsrc_node_destroy(struct io_rsrc_node *ref_node)
{
	percpu_ref_exit(&ref_node->refs);
	kfree(ref_node);
}

static __cold void io_rsrc_node_ref_zero(struct percpu_ref *ref)
{
	struct io_rsrc_node *node = container_of(ref, struct io_rsrc_node, refs);
	struct io_ring_ctx *ctx = node->rsrc_data->ctx;
	unsigned long flags;
	bool first_add = false;
	unsigned long delay = HZ;

	spin_lock_irqsave(&ctx->rsrc_ref_lock, flags);
	node->done = true;

	/* if we are mid-quiesce then do not delay */
	if (node->rsrc_data->quiesce)
		delay = 0;

	while (!list_empty(&ctx->rsrc_ref_list)) {
		node = list_first_entry(&ctx->rsrc_ref_list,
					struct io_rsrc_node, node);
		/* recycle ref nodes in order */
		if (!node->done)
			break;
		list_del(&node->node);
		first_add |= llist_add(&node->llist, &ctx->rsrc_put_llist);
	}
	spin_unlock_irqrestore(&ctx->rsrc_ref_lock, flags);

	if (first_add)
		mod_delayed_work(system_wq, &ctx->rsrc_put_work, delay);
}
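
io_rsrc_node_ref_zero() is the release callback wired to the node's percpu_ref in io_rsrc_node_alloc() below; it only runs after the ref has been killed and the last reference is dropped. As a reminder of the generic percpu_ref lifecycle this relies on, here is a minimal sketch with made-up names (not io_uring code):

/* Minimal percpu_ref lifecycle sketch; 'my_obj' is purely illustrative. */
#include <linux/kernel.h>
#include <linux/percpu-refcount.h>
#include <linux/slab.h>

struct my_obj {
	struct percpu_ref refs;
};

static void my_obj_release(struct percpu_ref *ref)
{
	/* called once, after percpu_ref_kill() and the final ref drop */
	struct my_obj *obj = container_of(ref, struct my_obj, refs);

	percpu_ref_exit(&obj->refs);
	kfree(obj);
}

static struct my_obj *my_obj_alloc(void)
{
	struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;
	if (percpu_ref_init(&obj->refs, my_obj_release, 0, GFP_KERNEL)) {
		kfree(obj);
		return NULL;
	}
	return obj;
}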

static struct io_rsrc_node *io_rsrc_node_alloc(void)
{
	struct io_rsrc_node *ref_node;

	ref_node = kzalloc(sizeof(*ref_node), GFP_KERNEL);
	if (!ref_node)
		return NULL;

	if (percpu_ref_init(&ref_node->refs, io_rsrc_node_ref_zero,
			    0, GFP_KERNEL)) {
		kfree(ref_node);
		return NULL;
	}
	INIT_LIST_HEAD(&ref_node->node);
	INIT_LIST_HEAD(&ref_node->rsrc_list);
	ref_node->done = false;
	return ref_node;
}

void io_rsrc_node_switch(struct io_ring_ctx *ctx,
			 struct io_rsrc_data *data_to_kill)
	__must_hold(&ctx->uring_lock)
{
	WARN_ON_ONCE(!ctx->rsrc_backup_node);
	WARN_ON_ONCE(data_to_kill && !ctx->rsrc_node);

	io_rsrc_refs_drop(ctx);

	if (data_to_kill) {
		struct io_rsrc_node *rsrc_node = ctx->rsrc_node;

		rsrc_node->rsrc_data = data_to_kill;
		spin_lock_irq(&ctx->rsrc_ref_lock);
		list_add_tail(&rsrc_node->node, &ctx->rsrc_ref_list);
		spin_unlock_irq(&ctx->rsrc_ref_lock);

		atomic_inc(&data_to_kill->refs);
		percpu_ref_kill(&rsrc_node->refs);
		ctx->rsrc_node = NULL;
	}

	if (!ctx->rsrc_node) {
		ctx->rsrc_node = ctx->rsrc_backup_node;
		ctx->rsrc_backup_node = NULL;
	}
}

int io_rsrc_node_switch_start(struct io_ring_ctx *ctx)
{
	if (ctx->rsrc_backup_node)
		return 0;
	ctx->rsrc_backup_node = io_rsrc_node_alloc();
	return ctx->rsrc_backup_node ? 0 : -ENOMEM;
}
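
The two functions above are always used as a pair by the registration paths in this file: io_rsrc_node_switch_start() pre-allocates the backup node while it is still safe to fail, and io_rsrc_node_switch() later installs it while retiring the current node. Condensed from io_sqe_files_register() further down, the caller pattern looks like this:

/*
 * Typical caller pattern (condensed from io_sqe_files_register() below):
 *
 *	ret = io_rsrc_node_switch_start(ctx);   may fail, allocates backup node
 *	if (ret)
 *		return ret;
 *	... populate or modify the resource table ...
 *	io_rsrc_node_switch(ctx, NULL);         cannot fail, installs backup node
 */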

static __cold int io_rsrc_ref_quiesce(struct io_rsrc_data *data,
				      struct io_ring_ctx *ctx)
{
	int ret;

	/* As we may drop ->uring_lock, other task may have started quiesce */
	if (data->quiesce)
		return -ENXIO;

	data->quiesce = true;
	do {
		ret = io_rsrc_node_switch_start(ctx);
		if (ret)
			break;
		io_rsrc_node_switch(ctx, data);

		/* kill initial ref, already quiesced if zero */
		if (atomic_dec_and_test(&data->refs))
			break;
		mutex_unlock(&ctx->uring_lock);
		flush_delayed_work(&ctx->rsrc_put_work);
		ret = wait_for_completion_interruptible(&data->done);
		if (!ret) {
			mutex_lock(&ctx->uring_lock);
			if (atomic_read(&data->refs) > 0) {
				/*
				 * it has been revived by another thread while
				 * we were unlocked
				 */
				mutex_unlock(&ctx->uring_lock);
			} else {
				break;
			}
		}

		atomic_inc(&data->refs);
		/* wait for all works potentially completing data->done */
		flush_delayed_work(&ctx->rsrc_put_work);
		reinit_completion(&data->done);

		ret = io_run_task_work_sig();
		mutex_lock(&ctx->uring_lock);
	} while (ret >= 0);
	data->quiesce = false;

	return ret;
}

static u64 *io_get_tag_slot(struct io_rsrc_data *data, unsigned int idx)
{
	unsigned int off = idx & IO_RSRC_TAG_TABLE_MASK;
	unsigned int table_idx = idx >> IO_RSRC_TAG_TABLE_SHIFT;

	return &data->tags[table_idx][off];
}
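
The tag array is the page-chunked table built by io_alloc_page_table(), so a flat index has to be split into a chunk index and an offset within that chunk. A worked example, assuming 4 KiB pages so each chunk holds 512 u64 tags (shift of 9, mask of 511):

/*
 * Worked example (assumed values: shift == 9, mask == 511):
 *	idx = 1000
 *	table_idx = 1000 >> 9  = 1	(second page-sized chunk)
 *	off       = 1000 & 511 = 488	(entry within that chunk)
 * so the tag lives at data->tags[1][488].
 */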

static void io_rsrc_data_free(struct io_rsrc_data *data)
{
	size_t size = data->nr * sizeof(data->tags[0][0]);

	if (data->tags)
		io_free_page_table((void **)data->tags, size);
	kfree(data);
}

static __cold int io_rsrc_data_alloc(struct io_ring_ctx *ctx, rsrc_put_fn *do_put,
				     u64 __user *utags, unsigned nr,
				     struct io_rsrc_data **pdata)
{
	struct io_rsrc_data *data;
	int ret = -ENOMEM;
	unsigned i;

	data = kzalloc(sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	data->tags = (u64 **)io_alloc_page_table(nr * sizeof(data->tags[0][0]));
	if (!data->tags) {
		kfree(data);
		return -ENOMEM;
	}

	data->nr = nr;
	data->ctx = ctx;
	data->do_put = do_put;
	if (utags) {
		ret = -EFAULT;
		for (i = 0; i < nr; i++) {
			u64 *tag_slot = io_get_tag_slot(data, i);

			if (copy_from_user(tag_slot, &utags[i],
					   sizeof(*tag_slot)))
				goto fail;
		}
	}

	atomic_set(&data->refs, 1);
	init_completion(&data->done);
	*pdata = data;
	return 0;
fail:
	io_rsrc_data_free(data);
	return ret;
}

static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
#if !defined(IO_URING_SCM_ALL)
	int i;

	for (i = 0; i < ctx->nr_user_files; i++) {
		struct file *file = io_file_from_index(&ctx->file_table, i);

		if (!file)
			continue;
		if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM)
			continue;
		io_file_bitmap_clear(&ctx->file_table, i);
		fput(file);
	}
#endif

#if defined(CONFIG_UNIX)
	if (ctx->ring_sock) {
		struct sock *sock = ctx->ring_sock->sk;
		struct sk_buff *skb;

		while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
			kfree_skb(skb);
	}
#endif
	io_free_file_tables(&ctx->file_table);
	io_rsrc_data_free(ctx->file_data);
	ctx->file_data = NULL;
	ctx->nr_user_files = 0;
}

static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
	unsigned nr = ctx->nr_user_files;
	int ret;

	if (!ctx->file_data)
		return -ENXIO;

	/*
	 * Quiesce may unlock ->uring_lock, and while it's not held
	 * prevent new requests using the table.
	 */
	ctx->nr_user_files = 0;
	ret = io_rsrc_ref_quiesce(ctx->file_data, ctx);
	ctx->nr_user_files = nr;
	if (!ret)
		__io_sqe_files_unregister(ctx);
	return ret;
}

/*
 * Ensure the UNIX gc is aware of our file set, so we are certain that
 * the io_uring can be safely unregistered on process exit, even if we have
 * loops in the file referencing. We account only files that can hold other
 * files because otherwise they can't form a loop and so are not interesting
 * for GC.
 */
static int io_scm_file_account(struct io_ring_ctx *ctx, struct file *file)
{
#if defined(CONFIG_UNIX)
	struct sock *sk = ctx->ring_sock->sk;
	struct sk_buff_head *head = &sk->sk_receive_queue;
	struct scm_fp_list *fpl;
	struct sk_buff *skb;

	if (likely(!io_file_need_scm(file)))
		return 0;

	/*
	 * See if we can merge this file into an existing skb SCM_RIGHTS
	 * file set. If there's no room, fall back to allocating a new skb
	 * and filling it in.
	 */
	spin_lock_irq(&head->lock);
	skb = skb_peek(head);
	if (skb && UNIXCB(skb).fp->count < SCM_MAX_FD)
		__skb_unlink(skb, head);
	else
		skb = NULL;
	spin_unlock_irq(&head->lock);

	if (!skb) {
		fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
		if (!fpl)
			return -ENOMEM;

		skb = alloc_skb(0, GFP_KERNEL);
		if (!skb) {
			kfree(fpl);
			return -ENOMEM;
		}

		fpl->user = get_uid(current_user());
		fpl->max = SCM_MAX_FD;
		fpl->count = 0;

		UNIXCB(skb).fp = fpl;
		skb->sk = sk;
		skb->destructor = unix_destruct_scm;
		refcount_add(skb->truesize, &sk->sk_wmem_alloc);
	}

	fpl = UNIXCB(skb).fp;
	fpl->fp[fpl->count++] = get_file(file);
	unix_inflight(fpl->user, file);
	skb_queue_head(head, skb);
	fput(file);
#endif
	return 0;
}

static void io_rsrc_file_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
{
	struct file *file = prsrc->file;
#if defined(CONFIG_UNIX)
	struct sock *sock = ctx->ring_sock->sk;
	struct sk_buff_head list, *head = &sock->sk_receive_queue;
	struct sk_buff *skb;
	int i;

	if (!io_file_need_scm(file)) {
		fput(file);
		return;
	}

	__skb_queue_head_init(&list);

	/*
	 * Find the skb that holds this file in its SCM_RIGHTS. When found,
	 * remove this entry and rearrange the file array.
	 */
	skb = skb_dequeue(head);
	while (skb) {
		struct scm_fp_list *fp;

		fp = UNIXCB(skb).fp;
		for (i = 0; i < fp->count; i++) {
			int left;

			if (fp->fp[i] != file)
				continue;

			unix_notinflight(fp->user, fp->fp[i]);
			left = fp->count - 1 - i;
			if (left) {
				memmove(&fp->fp[i], &fp->fp[i + 1],
						left * sizeof(struct file *));
			}
			fp->count--;
			if (!fp->count) {
				kfree_skb(skb);
				skb = NULL;
			} else {
				__skb_queue_tail(&list, skb);
			}
			fput(file);
			file = NULL;
			break;
		}

		if (!file)
			break;

		__skb_queue_tail(&list, skb);

		skb = skb_dequeue(head);
	}

	if (skb_peek(&list)) {
		spin_lock_irq(&head->lock);
		while ((skb = __skb_dequeue(&list)) != NULL)
			__skb_queue_tail(head, skb);
		spin_unlock_irq(&head->lock);
	}
#else
	fput(file);
#endif
}

static void __io_rsrc_put_work(struct io_rsrc_node *ref_node)
{
	struct io_rsrc_data *rsrc_data = ref_node->rsrc_data;
	struct io_ring_ctx *ctx = rsrc_data->ctx;
	struct io_rsrc_put *prsrc, *tmp;

	list_for_each_entry_safe(prsrc, tmp, &ref_node->rsrc_list, list) {
		list_del(&prsrc->list);

		if (prsrc->tag) {
			if (ctx->flags & IORING_SETUP_IOPOLL)
				mutex_lock(&ctx->uring_lock);

			spin_lock(&ctx->completion_lock);
			io_fill_cqe_aux(ctx, prsrc->tag, 0, 0);
			io_commit_cqring(ctx);
			spin_unlock(&ctx->completion_lock);
			io_cqring_ev_posted(ctx);

			if (ctx->flags & IORING_SETUP_IOPOLL)
				mutex_unlock(&ctx->uring_lock);
		}

		rsrc_data->do_put(ctx, prsrc);
		kfree(prsrc);
	}

	io_rsrc_node_destroy(ref_node);
	if (atomic_dec_and_test(&rsrc_data->refs))
		complete(&rsrc_data->done);
}

static void io_rsrc_put_work(struct work_struct *work)
{
	struct io_ring_ctx *ctx;
	struct llist_node *node;

	ctx = container_of(work, struct io_ring_ctx, rsrc_put_work.work);
	node = llist_del_all(&ctx->rsrc_put_llist);

	while (node) {
		struct io_rsrc_node *ref_node;
		struct llist_node *next = node->next;

		ref_node = llist_entry(node, struct io_rsrc_node, llist);
		__io_rsrc_put_work(ref_node);
		node = next;
	}
}

static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
				 unsigned nr_args, u64 __user *tags)
{
	__s32 __user *fds = (__s32 __user *) arg;
	struct file *file;
	int fd, ret;
	unsigned i;

	if (ctx->file_data)
		return -EBUSY;
	if (!nr_args)
		return -EINVAL;
	if (nr_args > IORING_MAX_FIXED_FILES)
		return -EMFILE;
	if (nr_args > rlimit(RLIMIT_NOFILE))
		return -EMFILE;
	ret = io_rsrc_node_switch_start(ctx);
	if (ret)
		return ret;
	ret = io_rsrc_data_alloc(ctx, io_rsrc_file_put, tags, nr_args,
				 &ctx->file_data);
	if (ret)
		return ret;

	if (!io_alloc_file_tables(&ctx->file_table, nr_args)) {
		io_rsrc_data_free(ctx->file_data);
		ctx->file_data = NULL;
		return -ENOMEM;
	}

	for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
		struct io_fixed_file *file_slot;

		if (fds && copy_from_user(&fd, &fds[i], sizeof(fd))) {
			ret = -EFAULT;
			goto fail;
		}
		/* allow sparse sets */
		if (!fds || fd == -1) {
			ret = -EINVAL;
			if (unlikely(*io_get_tag_slot(ctx->file_data, i)))
				goto fail;
			continue;
		}

		file = fget(fd);
		ret = -EBADF;
		if (unlikely(!file))
			goto fail;

		/*
		 * Don't allow io_uring instances to be registered. If UNIX
		 * isn't enabled, then this causes a reference cycle and this
		 * instance can never get freed. If UNIX is enabled we'll
		 * handle it just fine, but there's still no point in allowing
		 * a ring fd as it doesn't support regular read/write anyway.
		 */
		if (io_is_uring_fops(file)) {
			fput(file);
			goto fail;
		}
		ret = io_scm_file_account(ctx, file);
		if (ret) {
			fput(file);
			goto fail;
		}
		file_slot = io_fixed_file_slot(&ctx->file_table, i);
		io_fixed_file_set(file_slot, file);
		io_file_bitmap_set(&ctx->file_table, i);
	}

	io_rsrc_node_switch(ctx, NULL);
	return 0;
fail:
	__io_sqe_files_unregister(ctx);
	return ret;
}
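
io_sqe_files_register() is where a fixed file set registration lands. A minimal userspace sketch of registering a (possibly sparse) set through the raw io_uring_register(2) syscall follows; it is illustrative only, and the -1 entry is a sparse slot accepted by the "allow sparse sets" branch above:

/* Illustrative sketch only, not part of the kernel source. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register three fixed file slots: two real fds and one sparse slot. */
static int register_files(int ring_fd, int fd_a, int fd_b)
{
	__s32 fds[3] = { fd_a, -1, fd_b };	/* slot 1 left empty for now */

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_FILES, fds, 3);
}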

int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
			  struct io_rsrc_node *node, void *rsrc)
{
	u64 *tag_slot = io_get_tag_slot(data, idx);
	struct io_rsrc_put *prsrc;

	prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
	if (!prsrc)
		return -ENOMEM;

	prsrc->tag = *tag_slot;
	*tag_slot = 0;
	prsrc->rsrc = rsrc;
	list_add(&prsrc->list, &node->rsrc_list);
	return 0;
}

int io_install_fixed_file(struct io_kiocb *req, struct file *file,
			  unsigned int issue_flags, u32 slot_index)
	__must_hold(&req->ctx->uring_lock)
{
	struct io_ring_ctx *ctx = req->ctx;
	bool needs_switch = false;
	struct io_fixed_file *file_slot;
	int ret;

	if (io_is_uring_fops(file))
		return -EBADF;
	if (!ctx->file_data)
		return -ENXIO;
	if (slot_index >= ctx->nr_user_files)
		return -EINVAL;

	slot_index = array_index_nospec(slot_index, ctx->nr_user_files);
	file_slot = io_fixed_file_slot(&ctx->file_table, slot_index);

	if (file_slot->file_ptr) {
		struct file *old_file;

		ret = io_rsrc_node_switch_start(ctx);
		if (ret)
			goto err;

		old_file = (struct file *)(file_slot->file_ptr & FFS_MASK);
		ret = io_queue_rsrc_removal(ctx->file_data, slot_index,
					    ctx->rsrc_node, old_file);
		if (ret)
			goto err;
		file_slot->file_ptr = 0;
		io_file_bitmap_clear(&ctx->file_table, slot_index);
		needs_switch = true;
	}

	ret = io_scm_file_account(ctx, file);
	if (!ret) {
		*io_get_tag_slot(ctx->file_data, slot_index) = 0;
		io_fixed_file_set(file_slot, file);
		io_file_bitmap_set(&ctx->file_table, slot_index);
	}
err:
	if (needs_switch)
		io_rsrc_node_switch(ctx, ctx->file_data);
	if (ret)
		fput(file);
	return ret;
}

static int __io_sqe_files_update(struct io_ring_ctx *ctx,
				 struct io_uring_rsrc_update2 *up,
				 unsigned nr_args)
{
	u64 __user *tags = u64_to_user_ptr(up->tags);
	__s32 __user *fds = u64_to_user_ptr(up->data);
	struct io_rsrc_data *data = ctx->file_data;
	struct io_fixed_file *file_slot;
	struct file *file;
	int fd, i, err = 0;
	unsigned int done;
	bool needs_switch = false;

	if (!ctx->file_data)
		return -ENXIO;
	if (up->offset + nr_args > ctx->nr_user_files)
		return -EINVAL;

	for (done = 0; done < nr_args; done++) {
		u64 tag = 0;

		if ((tags && copy_from_user(&tag, &tags[done], sizeof(tag))) ||
		    copy_from_user(&fd, &fds[done], sizeof(fd))) {
			err = -EFAULT;
			break;
		}
		if ((fd == IORING_REGISTER_FILES_SKIP || fd == -1) && tag) {
			err = -EINVAL;
			break;
		}
		if (fd == IORING_REGISTER_FILES_SKIP)
			continue;

		i = array_index_nospec(up->offset + done, ctx->nr_user_files);
		file_slot = io_fixed_file_slot(&ctx->file_table, i);

		if (file_slot->file_ptr) {
			file = (struct file *)(file_slot->file_ptr & FFS_MASK);
			err = io_queue_rsrc_removal(data, i, ctx->rsrc_node, file);
			if (err)
				break;
			file_slot->file_ptr = 0;
			io_file_bitmap_clear(&ctx->file_table, i);
			needs_switch = true;
		}
		if (fd != -1) {
			file = fget(fd);
			if (!file) {
				err = -EBADF;
				break;
			}
			/*
			 * Don't allow io_uring instances to be registered. If
			 * UNIX isn't enabled, then this causes a reference
			 * cycle and this instance can never get freed. If UNIX
			 * is enabled we'll handle it just fine, but there's
			 * still no point in allowing a ring fd as it doesn't
			 * support regular read/write anyway.
			 */
			if (io_is_uring_fops(file)) {
				fput(file);
				err = -EBADF;
				break;
			}
			err = io_scm_file_account(ctx, file);
			if (err) {
				fput(file);
				break;
			}
			*io_get_tag_slot(data, i) = tag;
			io_fixed_file_set(file_slot, file);
			io_file_bitmap_set(&ctx->file_table, i);
		}
	}

	if (needs_switch)
		io_rsrc_node_switch(ctx, data);
	return done ? done : err;
}
|
2020-03-31 09:05:18 +03:00
|
|
|
|
2020-06-17 02:36:07 +03:00
|
|
|
static inline void __io_unaccount_mem(struct user_struct *user,
|
|
|
|
unsigned long nr_pages)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
|
|
|
atomic_long_sub(nr_pages, &user->locked_vm);
|
|
|
|
}
|
|
|
|
|
2020-06-17 02:36:07 +03:00
|
|
|
static inline int __io_account_mem(struct user_struct *user,
|
|
|
|
unsigned long nr_pages)
|
2019-01-07 20:46:33 +03:00
|
|
|
{
|
|
|
|
unsigned long page_limit, cur_pages, new_pages;
|
|
|
|
|
|
|
|
/* Don't allow more pages than we can safely lock */
|
|
|
|
page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
|
|
|
|
|
|
|
|
do {
|
|
|
|
cur_pages = atomic_long_read(&user->locked_vm);
|
|
|
|
new_pages = cur_pages + nr_pages;
|
|
|
|
if (new_pages > page_limit)
|
|
|
|
return -ENOMEM;
|
|
|
|
} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
|
|
|
|
new_pages) != cur_pages);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-02-10 06:14:12 +03:00
|
|
|
static void io_unaccount_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
|
2020-06-17 02:36:07 +03:00
|
|
|
{
|
2021-02-22 02:19:37 +03:00
|
|
|
if (ctx->user)
|
2020-06-17 02:36:07 +03:00
|
|
|
__io_unaccount_mem(ctx->user, nr_pages);
|
2020-06-17 02:36:09 +03:00
|
|
|
|
2021-02-10 06:14:12 +03:00
|
|
|
if (ctx->mm_account)
|
|
|
|
atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
|
2020-06-17 02:36:07 +03:00
|
|
|
}
|
|
|
|
|
2021-02-10 06:14:12 +03:00
|
|
|
static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
|
2020-06-17 02:36:07 +03:00
|
|
|
{
|
2020-06-17 02:36:09 +03:00
|
|
|
int ret;
|
|
|
|
|
2021-02-22 02:19:37 +03:00
|
|
|
if (ctx->user) {
|
2020-06-17 02:36:09 +03:00
|
|
|
ret = __io_account_mem(ctx->user, nr_pages);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2021-02-10 06:14:12 +03:00
|
|
|
if (ctx->mm_account)
|
|
|
|
atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
|
2020-06-17 02:36:07 +03:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2019-01-07 20:46:33 +03:00
|
|
|
static void io_mem_free(void *ptr)
|
|
|
|
{
|
io_uring: free allocated io_memory once
If io_allocate_scq_urings() fails to allocate an sq_* region, it will
call io_mem_free() for any previously allocated regions, but leave
dangling pointers to these regions in the ctx. Any regions which have
not yet been allocated are left NULL. Note that when returning
-EOVERFLOW, the previously allocated sq_ring is not freed, which appears
to be an unintentional leak.
When io_allocate_scq_urings() fails, io_uring_create() will call
io_ring_ctx_wait_and_kill(), which calls io_mem_free() on all the sq_*
regions, assuming the pointers are valid and not NULL.
This can result in pages being freed multiple times, which has been
observed to corrupt the page state, leading to subsequent fun. This can
also result in virt_to_page() on NULL, resulting in the use of bogus
page addresses, and yet more subsequent fun. The latter can be detected
with CONFIG_DEBUG_VIRTUAL on arm64.
Adding a cleanup path to io_allocate_scq_urings() complicates the logic,
so let's leave it to io_ring_ctx_free() to consistently free these
pointers, and simplify the io_allocate_scq_urings() error paths.
Full splats from before this patch below. Note that the pointer logged
by the DEBUG_VIRTUAL "non-linear address" warning has been hashed, and
is actually NULL.
[ 26.098129] page:ffff80000e949a00 count:0 mapcount:-128 mapping:0000000000000000 index:0x0
[ 26.102976] flags: 0x63fffc000000()
[ 26.104373] raw: 000063fffc000000 ffff80000e86c188 ffff80000ea3df08 0000000000000000
[ 26.108917] raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000
[ 26.137235] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
[ 26.143960] ------------[ cut here ]------------
[ 26.146020] kernel BUG at include/linux/mm.h:547!
[ 26.147586] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 26.149163] Modules linked in:
[ 26.150287] Process syz-executor.21 (pid: 20204, stack limit = 0x000000000e9cefeb)
[ 26.153307] CPU: 2 PID: 20204 Comm: syz-executor.21 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #18
[ 26.156566] Hardware name: linux,dummy-virt (DT)
[ 26.158089] pstate: 40400005 (nZcv daif +PAN -UAO)
[ 26.159869] pc : io_mem_free+0x9c/0xa8
[ 26.161436] lr : io_mem_free+0x9c/0xa8
[ 26.162720] sp : ffff000013003d60
[ 26.164048] x29: ffff000013003d60 x28: ffff800025048040
[ 26.165804] x27: 0000000000000000 x26: ffff800025048040
[ 26.167352] x25: 00000000000000c0 x24: ffff0000112c2820
[ 26.169682] x23: 0000000000000000 x22: 0000000020000080
[ 26.171899] x21: ffff80002143b418 x20: ffff80002143b400
[ 26.174236] x19: ffff80002143b280 x18: 0000000000000000
[ 26.176607] x17: 0000000000000000 x16: 0000000000000000
[ 26.178997] x15: 0000000000000000 x14: 0000000000000000
[ 26.181508] x13: 00009178a5e077b2 x12: 0000000000000001
[ 26.183863] x11: 0000000000000000 x10: 0000000000000980
[ 26.186437] x9 : ffff000013003a80 x8 : ffff800025048a20
[ 26.189006] x7 : ffff8000250481c0 x6 : ffff80002ffe9118
[ 26.191359] x5 : ffff80002ffe9118 x4 : 0000000000000000
[ 26.193863] x3 : ffff80002ffefe98 x2 : 44c06ddd107d1f00
[ 26.196642] x1 : 0000000000000000 x0 : 000000000000003e
[ 26.198892] Call trace:
[ 26.199893] io_mem_free+0x9c/0xa8
[ 26.201155] io_ring_ctx_wait_and_kill+0xec/0x180
[ 26.202688] io_uring_setup+0x6c4/0x6f0
[ 26.204091] __arm64_sys_io_uring_setup+0x18/0x20
[ 26.205576] el0_svc_common.constprop.0+0x7c/0xe8
[ 26.207186] el0_svc_handler+0x28/0x78
[ 26.208389] el0_svc+0x8/0xc
[ 26.209408] Code: aa0203e0 d0006861 9133a021 97fcdc3c (d4210000)
[ 26.211995] ---[ end trace bdb81cd43a21e50d ]---
[ 81.770626] ------------[ cut here ]------------
[ 81.825015] virt_to_phys used for non-linear address: 000000000d42f2c7 ( (null))
[ 81.827860] WARNING: CPU: 1 PID: 30171 at arch/arm64/mm/physaddr.c:15 __virt_to_phys+0x48/0x68
[ 81.831202] Modules linked in:
[ 81.832212] CPU: 1 PID: 30171 Comm: syz-executor.20 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #19
[ 81.835616] Hardware name: linux,dummy-virt (DT)
[ 81.836863] pstate: 60400005 (nZCv daif +PAN -UAO)
[ 81.838727] pc : __virt_to_phys+0x48/0x68
[ 81.840572] lr : __virt_to_phys+0x48/0x68
[ 81.842264] sp : ffff80002cf67c70
[ 81.843858] x29: ffff80002cf67c70 x28: ffff800014358e18
[ 81.846463] x27: 0000000000000000 x26: 0000000020000080
[ 81.849148] x25: 0000000000000000 x24: ffff80001bb01f40
[ 81.851986] x23: ffff200011db06c8 x22: ffff2000127e3c60
[ 81.854351] x21: ffff800014358cc0 x20: ffff800014358d98
[ 81.856711] x19: 0000000000000000 x18: 0000000000000000
[ 81.859132] x17: 0000000000000000 x16: 0000000000000000
[ 81.861586] x15: 0000000000000000 x14: 0000000000000000
[ 81.863905] x13: 0000000000000000 x12: ffff1000037603e9
[ 81.866226] x11: 1ffff000037603e8 x10: 0000000000000980
[ 81.868776] x9 : ffff80002cf67840 x8 : ffff80001bb02920
[ 81.873272] x7 : ffff1000037603e9 x6 : ffff80001bb01f47
[ 81.875266] x5 : ffff1000037603e9 x4 : dfff200000000000
[ 81.876875] x3 : ffff200010087528 x2 : ffff1000059ecf58
[ 81.878751] x1 : 44c06ddd107d1f00 x0 : 0000000000000000
[ 81.880453] Call trace:
[ 81.881164] __virt_to_phys+0x48/0x68
[ 81.882919] io_mem_free+0x18/0x110
[ 81.886585] io_ring_ctx_wait_and_kill+0x13c/0x1f0
[ 81.891212] io_uring_setup+0xa60/0xad0
[ 81.892881] __arm64_sys_io_uring_setup+0x2c/0x38
[ 81.894398] el0_svc_common.constprop.0+0xac/0x150
[ 81.896306] el0_svc_handler+0x34/0x88
[ 81.897744] el0_svc+0x8/0xc
[ 81.898715] ---[ end trace b4a703802243cbba ]---
Fixes: 2b188cc1bb857a9d ("Add io_uring IO interface")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-block@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-30 19:30:21 +03:00
|
|
|
struct page *page;
|
|
|
|
|
|
|
|
if (!ptr)
|
|
|
|
return;
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2019-04-30 19:30:21 +03:00
|
|
|
page = virt_to_head_page(ptr);
|
2019-01-07 20:46:33 +03:00
|
|
|
if (put_page_testzero(page))
|
|
|
|
free_compound_page(page);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *io_mem_alloc(size_t size)
|
|
|
|
{
|
2022-01-25 08:17:36 +03:00
|
|
|
gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP;
|
2019-01-07 20:46:33 +03:00
|
|
|
|
2022-01-25 08:17:36 +03:00
|
|
|
return (void *) __get_free_pages(gfp, get_order(size));
|
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
2022-04-26 21:21:25 +03:00
|
|
|
static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
|
|
|
|
unsigned int cq_entries, size_t *sq_offset)
|
2019-08-26 20:23:46 +03:00
|
|
|
{
|
|
|
|
struct io_rings *rings;
|
|
|
|
size_t off, sq_array_size;
|
|
|
|
|
|
|
|
off = struct_size(rings, cqes, cq_entries);
|
|
|
|
if (off == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
2022-04-26 21:21:25 +03:00
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
if (check_shl_overflow(off, 1, &off))
|
|
|
|
return SIZE_MAX;
|
|
|
|
}
|
2019-08-26 20:23:46 +03:00
|
|
|
|
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
off = ALIGN(off, SMP_CACHE_BYTES);
|
|
|
|
if (off == 0)
|
|
|
|
return SIZE_MAX;
|
|
|
|
#endif
|
|
|
|
|
2020-07-11 12:31:11 +03:00
|
|
|
if (sq_offset)
|
|
|
|
*sq_offset = off;
|
|
|
|
|
2019-08-26 20:23:46 +03:00
|
|
|
sq_array_size = array_size(sizeof(u32), sq_entries);
|
|
|
|
if (sq_array_size == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
|
|
|
|
|
|
|
if (check_add_overflow(off, sq_array_size, &off))
|
|
|
|
return SIZE_MAX;
|
|
|
|
|
|
|
|
return off;
|
|
|
|
}
|
|
|
|
|
2021-04-25 16:32:23 +03:00
|
|
|
static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_mapped_ubuf **slot)
|
2021-04-11 03:46:35 +03:00
|
|
|
{
|
2021-04-25 16:32:23 +03:00
|
|
|
struct io_mapped_ubuf *imu = *slot;
|
2021-04-11 03:46:35 +03:00
|
|
|
unsigned int i;
|
|
|
|
|
2021-04-28 15:11:29 +03:00
|
|
|
if (imu != ctx->dummy_ubuf) {
|
|
|
|
for (i = 0; i < imu->nr_bvecs; i++)
|
|
|
|
unpin_user_page(imu->bvec[i].bv_page);
|
|
|
|
if (imu->acct_pages)
|
|
|
|
io_unaccount_mem(ctx, imu->acct_pages);
|
|
|
|
kvfree(imu);
|
|
|
|
}
|
2021-04-25 16:32:23 +03:00
|
|
|
*slot = NULL;
|
2021-04-11 03:46:35 +03:00
|
|
|
}
|
|
|
|
|
2021-04-25 16:32:25 +03:00
|
|
|
static void io_rsrc_buf_put(struct io_ring_ctx *ctx, struct io_rsrc_put *prsrc)
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
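As a userspace usage illustration (a sketch assuming liburing; the io_uring_* helpers below are liburing's, everything else is hypothetical), the register-then-READ_FIXED flow described above looks roughly like this:

#include <liburing.h>
#include <sys/uio.h>
#include <fcntl.h>
#include <stdlib.h>

/* read the start of 'path' into a pre-registered (fixed) buffer */
static int read_with_fixed_buf(const char *path)
{
	struct io_uring ring;
	struct iovec iov = { .iov_base = malloc(4096), .iov_len = 4096 };
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int fd, ret;

	if (!iov.iov_base || io_uring_queue_init(8, &ring, 0) < 0)
		return -1;
	/* pins the buffer once, i.e. IORING_REGISTER_BUFFERS under the hood */
	if (io_uring_register_buffers(&ring, &iov, 1) < 0)
		return -1;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	sqe = io_uring_get_sqe(&ring);
	/* buf_index 0 selects the buffer registered above */
	io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret) {
		ret = cqe->res;		/* bytes read, or -errno */
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return ret;
}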
|
|
|
{
|
2021-04-25 16:32:26 +03:00
|
|
|
io_buffer_unmap(ctx, &prsrc->buf);
|
|
|
|
prsrc->buf = NULL;
|
2021-04-25 16:32:25 +03:00
|
|
|
}
|
2019-01-09 19:16:05 +03:00
|
|
|
|
2021-04-25 16:32:25 +03:00
|
|
|
static void __io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
|
|
|
|
{
|
|
|
|
unsigned int i;
|
2019-01-09 19:16:05 +03:00
|
|
|
|
2021-04-11 03:46:35 +03:00
|
|
|
for (i = 0; i < ctx->nr_user_bufs; i++)
|
|
|
|
io_buffer_unmap(ctx, &ctx->user_bufs[i]);
|
2019-01-09 19:16:05 +03:00
|
|
|
kfree(ctx->user_bufs);
|
2021-04-30 11:25:15 +03:00
|
|
|
io_rsrc_data_free(ctx->buf_data);
|
2019-01-09 19:16:05 +03:00
|
|
|
ctx->user_bufs = NULL;
|
2021-04-25 16:32:25 +03:00
|
|
|
ctx->buf_data = NULL;
|
2019-01-09 19:16:05 +03:00
|
|
|
ctx->nr_user_bufs = 0;
|
2021-04-25 16:32:25 +03:00
|
|
|
}
|
|
|
|
|
2021-01-06 23:39:10 +03:00
|
|
|
static int io_sqe_buffers_unregister(struct io_ring_ctx *ctx)
|
2019-01-09 19:16:05 +03:00
|
|
|
{
|
2022-06-13 08:30:06 +03:00
|
|
|
unsigned nr = ctx->nr_user_bufs;
|
2021-04-25 16:32:25 +03:00
|
|
|
int ret;
|
2019-01-09 19:16:05 +03:00
|
|
|
|
2021-04-25 16:32:25 +03:00
|
|
|
if (!ctx->buf_data)
|
2019-01-09 19:16:05 +03:00
|
|
|
return -ENXIO;
|
|
|
|
|
2022-06-13 08:30:06 +03:00
|
|
|
/*
|
|
|
|
* Quiesce may unlock ->uring_lock, and while it's not held
|
|
|
|
* prevent new requests using the table.
|
|
|
|
*/
|
|
|
|
ctx->nr_user_bufs = 0;
|
2021-04-25 16:32:25 +03:00
|
|
|
ret = io_rsrc_ref_quiesce(ctx->buf_data, ctx);
|
2022-06-13 08:30:06 +03:00
|
|
|
ctx->nr_user_bufs = nr;
|
2021-04-25 16:32:25 +03:00
|
|
|
if (!ret)
|
|
|
|
__io_sqe_buffers_unregister(ctx);
|
|
|
|
return ret;
|
2019-01-09 19:16:05 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
|
|
|
|
void __user *arg, unsigned index)
|
|
|
|
{
|
|
|
|
struct iovec __user *src;
|
|
|
|
|
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
if (ctx->compat) {
|
|
|
|
struct compat_iovec __user *ciovs;
|
|
|
|
struct compat_iovec ciov;
|
|
|
|
|
|
|
|
ciovs = (struct compat_iovec __user *) arg;
|
|
|
|
if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
|
|
|
|
return -EFAULT;
|
|
|
|
|
2019-12-12 02:12:15 +03:00
|
|
|
dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
|
2019-01-09 19:16:05 +03:00
|
|
|
dst->iov_len = ciov.iov_len;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
src = (struct iovec __user *) arg;
|
|
|
|
if (copy_from_user(dst, &src[index], sizeof(*dst)))
|
|
|
|
return -EFAULT;
|
|
|
|
return 0;
|
|
|
|
}
/*
 * Not super efficient, but this is just registration time. And we do cache
 * the last compound head, so generally we'll only do a full search if we don't
 * match that one.
 *
 * We check if the given compound head page has already been accounted, to
 * avoid double accounting it. This allows us to account the full size of the
 * page, not just the constituent pages of a huge page.
 */
static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
				  int nr_pages, struct page *hpage)
{
	int i, j;

	/* check current page array */
	for (i = 0; i < nr_pages; i++) {
		if (!PageCompound(pages[i]))
			continue;
		if (compound_head(pages[i]) == hpage)
			return true;
	}

	/* check previously registered pages */
	for (i = 0; i < ctx->nr_user_bufs; i++) {
		struct io_mapped_ubuf *imu = ctx->user_bufs[i];

		for (j = 0; j < imu->nr_bvecs; j++) {
			if (!PageCompound(imu->bvec[j].bv_page))
				continue;
			if (compound_head(imu->bvec[j].bv_page) == hpage)
				return true;
		}
	}

	return false;
}
static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
				 int nr_pages, struct io_mapped_ubuf *imu,
				 struct page **last_hpage)
{
	int i, ret;

	imu->acct_pages = 0;
	for (i = 0; i < nr_pages; i++) {
		if (!PageCompound(pages[i])) {
			imu->acct_pages++;
		} else {
			struct page *hpage;

			hpage = compound_head(pages[i]);
			if (hpage == *last_hpage)
				continue;
			*last_hpage = hpage;
			if (headpage_already_acct(ctx, pages, i, hpage))
				continue;
			imu->acct_pages += page_size(hpage) >> PAGE_SHIFT;
		}
	}

	if (!imu->acct_pages)
		return 0;

	ret = io_account_mem(ctx, imu->acct_pages);
	if (ret)
		imu->acct_pages = 0;
	return ret;
}
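The accounting above charges a compound (huge) page once at its full size, even when several registered iovecs land inside it. A rough userspace sketch of the consequence follows; it assumes liburing and a system with hugetlb pages reserved, and is illustrative only: the single 2MB page behind both slices should be charged against the memlock accounting once, not once per iovec.

/* --- userspace example, not part of io_uring.c: two slices of one huge page --- */
#define _GNU_SOURCE
#include <liburing.h>
#include <sys/mman.h>

int register_two_slices(struct io_uring *ring)
{
	size_t huge = 2 * 1024 * 1024;	/* assumes 2MB hugetlb pages */
	void *base = mmap(NULL, huge, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	struct iovec iovs[2] = {
		{ .iov_base = base,                    .iov_len = huge / 2 },
		{ .iov_base = (char *)base + huge / 2, .iov_len = huge / 2 },
	};

	if (base == MAP_FAILED)
		return -1;
	/* Both iovecs are backed by the same compound head page */
	return io_uring_register_buffers(ring, iovs, 2);
}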
static struct page **io_pin_pages(unsigned long ubuf, unsigned long len,
				  int *npages)
{
	unsigned long start, end, nr_pages;
	struct vm_area_struct **vmas = NULL;
	struct page **pages = NULL;
	int i, pret, ret = -ENOMEM;

	end = (ubuf + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
	start = ubuf >> PAGE_SHIFT;
	nr_pages = end - start;

	pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL);
	if (!pages)
		goto done;

	vmas = kvmalloc_array(nr_pages, sizeof(struct vm_area_struct *),
			      GFP_KERNEL);
	if (!vmas)
		goto done;

	ret = 0;
	mmap_read_lock(current->mm);
	pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
			      pages, vmas);
	if (pret == nr_pages) {
		/* don't support file backed memory */
		for (i = 0; i < nr_pages; i++) {
			struct vm_area_struct *vma = vmas[i];

			if (vma_is_shmem(vma))
				continue;
			if (vma->vm_file &&
			    !is_file_hugepages(vma->vm_file)) {
				ret = -EOPNOTSUPP;
				break;
			}
		}
		*npages = nr_pages;
	} else {
		ret = pret < 0 ? pret : -EFAULT;
	}
	mmap_read_unlock(current->mm);
	if (ret) {
		/*
		 * if we did partial map, or found file backed vmas,
		 * release any pages we did get
		 */
		if (pret > 0)
			unpin_user_pages(pages, pret);
		goto done;
	}
	ret = 0;
done:
	kvfree(vmas);
	if (ret < 0) {
		kvfree(pages);
		pages = ERR_PTR(ret);
	}
	return pages;
}
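io_pin_pages() rejects plain file-backed VMAs (shmem and hugetlbfs mappings are allowed), which is what the commit message means by registration failing with EOPNOTSUPP. A hedged userspace sketch of the failing case, assuming liburing and a caller-supplied scratch file path; the exact errno is what the code above leads us to expect, not something this sketch guarantees.

/* --- userspace example, not part of io_uring.c: file-backed buffers are rejected --- */
#include <liburing.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int try_file_backed(struct io_uring *ring, const char *path)
{
	struct iovec iov;
	void *map;
	int fd, ret;

	fd = open(path, O_RDWR);
	if (fd < 0)
		return -1;
	ftruncate(fd, 4096);	/* make sure a full page exists behind the mapping */
	map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		close(fd);
		return -1;
	}

	iov.iov_base = map;
	iov.iov_len = 4096;
	ret = io_uring_register_buffers(ring, &iov, 1);
	fprintf(stderr, "register_buffers: %d (expected -EOPNOTSUPP)\n", ret);

	munmap(map, 4096);
	close(fd);
	return ret;
}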
static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
				  struct io_mapped_ubuf **pimu,
				  struct page **last_hpage)
{
	struct io_mapped_ubuf *imu = NULL;
	struct page **pages = NULL;
	unsigned long off;
	size_t size;
	int ret, nr_pages, i;

	if (!iov->iov_base) {
		*pimu = ctx->dummy_ubuf;
		return 0;
	}

	*pimu = NULL;
	ret = -ENOMEM;

	pages = io_pin_pages((unsigned long) iov->iov_base, iov->iov_len,
			     &nr_pages);
	if (IS_ERR(pages)) {
		ret = PTR_ERR(pages);
		pages = NULL;
		goto done;
	}

	imu = kvmalloc(struct_size(imu, bvec, nr_pages), GFP_KERNEL);
	if (!imu)
		goto done;

	ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage);
	if (ret) {
		unpin_user_pages(pages, nr_pages);
		goto done;
	}

	off = (unsigned long) iov->iov_base & ~PAGE_MASK;
	size = iov->iov_len;
	for (i = 0; i < nr_pages; i++) {
		size_t vec_len;

		vec_len = min_t(size_t, size, PAGE_SIZE - off);
		imu->bvec[i].bv_page = pages[i];
		imu->bvec[i].bv_len = vec_len;
		imu->bvec[i].bv_offset = off;
		off = 0;
		size -= vec_len;
	}
	/* store original address for later verification */
	imu->ubuf = (unsigned long) iov->iov_base;
	imu->ubuf_end = imu->ubuf + iov->iov_len;
	imu->nr_bvecs = nr_pages;
	*pimu = imu;
	ret = 0;
done:
	if (ret)
		kvfree(imu);
	kvfree(pages);
	return ret;
}
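The bvec loop above splits a possibly page-unaligned user range into per-page segments: the first segment is shortened by the in-page offset, the rest are full pages until the length runs out. A small standalone sketch of the same arithmetic, assuming a 4KB page size purely for illustration:

/* --- standalone example, not part of io_uring.c: bvec split arithmetic --- */
#include <stdio.h>
#include <stddef.h>

#define PG 4096UL	/* assumed page size for the illustration */

static void show_split(unsigned long addr, size_t len)
{
	unsigned long off = addr & (PG - 1);	/* offset inside the first page */
	size_t size = len;
	int i = 0;

	while (size) {
		size_t seg = size < PG - off ? size : PG - off;

		printf("bvec[%d]: offset=%lu len=%zu\n", i++, off, seg);
		off = 0;	/* only the first segment carries an offset */
		size -= seg;
	}
}

int main(void)
{
	show_split(0x1000 + 100, 10000);	/* unaligned 10000-byte range */
	return 0;
}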
static int io_buffers_map_alloc(struct io_ring_ctx *ctx, unsigned int nr_args)
{
	ctx->user_bufs = kcalloc(nr_args, sizeof(*ctx->user_bufs), GFP_KERNEL);
	return ctx->user_bufs ? 0 : -ENOMEM;
}
static int io_buffer_validate(struct iovec *iov)
{
	unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);

	/*
	 * Don't impose further limits on the size and buffer
	 * constraints here, we'll -EINVAL later when IO is
	 * submitted if they are wrong.
	 */
	if (!iov->iov_base)
		return iov->iov_len ? -EFAULT : 0;
	if (!iov->iov_len)
		return -EFAULT;

	/* arbitrary limit, but we need something */
	if (iov->iov_len > SZ_1G)
		return -EFAULT;

	if (check_add_overflow((unsigned long)iov->iov_base, acct_len, &tmp))
		return -EOVERFLOW;

	return 0;
}
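Per the commit message, pinned memory is checked against RLIMIT_MEMLOCK, and the validation above additionally caps each individual buffer at 1G. A hedged sketch of raising the memlock soft limit before registering large buffers; whether this is needed depends on the distribution's default limits and the caller's privileges.

/* --- userspace example, not part of io_uring.c: bump RLIMIT_MEMLOCK --- */
#include <sys/resource.h>
#include <stdio.h>

int raise_memlock_limit(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0)
		return -1;
	rl.rlim_cur = rl.rlim_max;	/* soft limit up to the hard limit */
	if (setrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
		perror("setrlimit(RLIMIT_MEMLOCK)");
		return -1;
	}
	return 0;
}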
static int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
				   unsigned int nr_args, u64 __user *tags)
{
	struct page *last_hpage = NULL;
	struct io_rsrc_data *data;
	int i, ret;
	struct iovec iov;

	if (ctx->user_bufs)
		return -EBUSY;
	if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS)
		return -EINVAL;
	ret = io_rsrc_node_switch_start(ctx);
	if (ret)
		return ret;
	ret = io_rsrc_data_alloc(ctx, io_rsrc_buf_put, tags, nr_args, &data);
	if (ret)
		return ret;
	ret = io_buffers_map_alloc(ctx, nr_args);
	if (ret) {
		io_rsrc_data_free(data);
		return ret;
	}

	for (i = 0; i < nr_args; i++, ctx->nr_user_bufs++) {
		if (arg) {
			ret = io_copy_iov(ctx, &iov, arg, i);
			if (ret)
				break;
			ret = io_buffer_validate(&iov);
			if (ret)
				break;
		} else {
			memset(&iov, 0, sizeof(iov));
		}

		if (!iov.iov_base && *io_get_tag_slot(data, i)) {
			ret = -EINVAL;
			break;
		}

		ret = io_sqe_buffer_register(ctx, &iov, &ctx->user_bufs[i],
					     &last_hpage);
		if (ret)
			break;
	}

	WARN_ON_ONCE(ctx->buf_data);

	ctx->buf_data = data;
	if (ret)
		__io_sqe_buffers_unregister(ctx);
	else
		io_rsrc_node_switch(ctx, NULL);
	return ret;
}
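Registration above is all-or-nothing for the table, and, as the commit message says, an application may drop the current set and register a new one at any point in the ring's lifetime. A minimal sketch of that swap, assuming liburing:

/* --- userspace example, not part of io_uring.c: swap the fixed-buffer set --- */
#include <liburing.h>

int swap_fixed_buffers(struct io_uring *ring,
		       struct iovec *new_iovs, unsigned nr)
{
	int ret;

	/* Drop the old table first; -ENXIO just means nothing was registered */
	ret = io_uring_unregister_buffers(ring);
	if (ret && ret != -ENXIO)
		return ret;

	return io_uring_register_buffers(ring, new_iovs, nr);
}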
static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
				   struct io_uring_rsrc_update2 *up,
				   unsigned int nr_args)
{
	u64 __user *tags = u64_to_user_ptr(up->tags);
	struct iovec iov, __user *iovs = u64_to_user_ptr(up->data);
	struct page *last_hpage = NULL;
	bool needs_switch = false;
	__u32 done;
	int i, err;

	if (!ctx->buf_data)
		return -ENXIO;
	if (up->offset + nr_args > ctx->nr_user_bufs)
		return -EINVAL;

	for (done = 0; done < nr_args; done++) {
		struct io_mapped_ubuf *imu;
		int offset = up->offset + done;
		u64 tag = 0;

		err = io_copy_iov(ctx, &iov, iovs, done);
		if (err)
			break;
		if (tags && copy_from_user(&tag, &tags[done], sizeof(tag))) {
			err = -EFAULT;
			break;
		}
		err = io_buffer_validate(&iov);
		if (err)
			break;
		if (!iov.iov_base && tag) {
			err = -EINVAL;
			break;
		}
		err = io_sqe_buffer_register(ctx, &iov, &imu, &last_hpage);
		if (err)
			break;

		i = array_index_nospec(offset, ctx->nr_user_bufs);
		if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
			err = io_queue_rsrc_removal(ctx->buf_data, i,
						    ctx->rsrc_node, ctx->user_bufs[i]);
			if (unlikely(err)) {
				io_buffer_unmap(ctx, &imu);
				break;
			}
			ctx->user_bufs[i] = NULL;
			needs_switch = true;
		}

		ctx->user_bufs[i] = imu;
		*io_get_tag_slot(ctx->buf_data, offset) = tag;
	}

	if (needs_switch)
		io_rsrc_node_switch(ctx, ctx->buf_data);
	return done ? done : err;
}
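__io_sqe_buffers_update() backs the buffers-update registration opcode: it replaces individual slots of an already-registered table and queues the old entries for deferred release, returning how many slots were updated. A hedged sketch of driving it from userspace, assuming a liburing version that exposes io_uring_register_buffers_update_tag(); the helper name and availability are an assumption, not something this file defines.

/* --- userspace example, not part of io_uring.c: replace one registered slot ---
 * Assumes liburing provides io_uring_register_buffers_update_tag().
 */
#include <liburing.h>

int replace_slot3(struct io_uring *ring, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	__u64 tag = 0;	/* 0 = no completion tag for the replaced entry */

	/* Returns the number of slots updated (1 here) or a negative errno */
	return io_uring_register_buffers_update_tag(ring, /*off=*/3, &iov,
						    &tag, /*nr=*/1);
}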
static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
			       unsigned int eventfd_async)
{
	struct io_ev_fd *ev_fd;
	__s32 __user *fds = arg;
	int fd;

	ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
					  lockdep_is_held(&ctx->uring_lock));
	if (ev_fd)
		return -EBUSY;

	if (copy_from_user(&fd, fds, sizeof(*fds)))
		return -EFAULT;

	ev_fd = kmalloc(sizeof(*ev_fd), GFP_KERNEL);
	if (!ev_fd)
		return -ENOMEM;

	ev_fd->cq_ev_fd = eventfd_ctx_fdget(fd);
	if (IS_ERR(ev_fd->cq_ev_fd)) {
		int ret = PTR_ERR(ev_fd->cq_ev_fd);

		kfree(ev_fd);
		return ret;
	}
	ev_fd->eventfd_async = eventfd_async;
	ctx->has_evfd = true;
	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
	return 0;
}
static void io_eventfd_put(struct rcu_head *rcu)
{
	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);

	eventfd_ctx_put(ev_fd->cq_ev_fd);
	kfree(ev_fd);
}
static int io_eventfd_unregister(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
					  lockdep_is_held(&ctx->uring_lock));
	if (ev_fd) {
		ctx->has_evfd = false;
		rcu_assign_pointer(ctx->io_ev_fd, NULL);
		call_rcu(&ev_fd->rcu, io_eventfd_put);
		return 0;
	}

	return -ENXIO;
}
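io_eventfd_register()/io_eventfd_unregister() back eventfd registration (with eventfd_async selecting the async-only variant): the registered eventfd is signalled as completions are posted, so an application can sleep on it instead of on the ring. A minimal userspace sketch, assuming liburing:

/* --- userspace example, not part of io_uring.c: completion notification via eventfd --- */
#include <liburing.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>

int wait_via_eventfd(struct io_uring *ring)
{
	uint64_t count;
	int efd = eventfd(0, 0);

	if (efd < 0)
		return -1;
	if (io_uring_register_eventfd(ring, efd) < 0) {
		close(efd);
		return -1;
	}

	/* ... submit some IO elsewhere ... */

	/* Blocks until the kernel signals at least one new completion */
	if (read(efd, &count, sizeof(count)) != sizeof(count))
		return -1;

	io_uring_unregister_eventfd(ring);
	close(efd);
	return 0;
}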
static void io_destroy_buffers(struct io_ring_ctx *ctx)
{
	struct io_buffer_list *bl;
	unsigned long index;
	int i;

	for (i = 0; i < BGID_ARRAY; i++) {
		if (!ctx->io_bl)
			break;
		__io_remove_buffers(ctx, &ctx->io_bl[i], -1U);
	}

	xa_for_each(&ctx->io_bl_xa, index, bl) {
		xa_erase(&ctx->io_bl_xa, bl->bgid);
		__io_remove_buffers(ctx, bl, -1U);
		kfree(bl);
	}

	while (!list_empty(&ctx->io_buffers_pages)) {
		struct page *page;

		page = list_first_entry(&ctx->io_buffers_pages, struct page, lru);
		list_del_init(&page->lru);
		__free_page(page);
	}
}
static void io_req_caches_free(struct io_ring_ctx *ctx)
{
	struct io_submit_state *state = &ctx->submit_state;
	int nr = 0;

	mutex_lock(&ctx->uring_lock);
	io_flush_cached_locked_reqs(ctx, state);

	while (!io_req_cache_empty(ctx)) {
		struct io_wq_work_node *node;
		struct io_kiocb *req;

		node = wq_stack_extract(&state->free_list);
		req = container_of(node, struct io_kiocb, comp_list);
		kmem_cache_free(req_cachep, req);
		nr++;
	}
	if (nr)
		percpu_ref_put_many(&ctx->refs, nr);
	mutex_unlock(&ctx->uring_lock);
}
static void io_wait_rsrc_data(struct io_rsrc_data *data)
{
	if (data && !atomic_dec_and_test(&data->refs))
		wait_for_completion(&data->done);
}
static void io_flush_apoll_cache(struct io_ring_ctx *ctx)
{
	struct async_poll *apoll;

	while (!list_empty(&ctx->apoll_cache)) {
		apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,
					 poll.wait.entry);
		list_del(&apoll->poll.wait.entry);
		kfree(apoll);
	}
}
static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
{
	io_sq_thread_finish(ctx);

	if (ctx->mm_account) {
		mmdrop(ctx->mm_account);
		ctx->mm_account = NULL;
	}

	io_rsrc_refs_drop(ctx);
	/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
	io_wait_rsrc_data(ctx->buf_data);
	io_wait_rsrc_data(ctx->file_data);

	mutex_lock(&ctx->uring_lock);
	if (ctx->buf_data)
		__io_sqe_buffers_unregister(ctx);
	if (ctx->file_data)
		__io_sqe_files_unregister(ctx);
	if (ctx->rings)
		__io_cqring_overflow_flush(ctx, true);
	io_eventfd_unregister(ctx);
	io_flush_apoll_cache(ctx);
	mutex_unlock(&ctx->uring_lock);
	io_destroy_buffers(ctx);
	if (ctx->sq_creds)
		put_cred(ctx->sq_creds);

	/* there are no registered resources left, nobody uses it */
	if (ctx->rsrc_node)
		io_rsrc_node_destroy(ctx->rsrc_node);
	if (ctx->rsrc_backup_node)
		io_rsrc_node_destroy(ctx->rsrc_backup_node);
	flush_delayed_work(&ctx->rsrc_put_work);
	flush_delayed_work(&ctx->fallback_work);

	WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
	WARN_ON_ONCE(!llist_empty(&ctx->rsrc_put_llist));

#if defined(CONFIG_UNIX)
	if (ctx->ring_sock) {
		ctx->ring_sock->file = NULL; /* so that iput() is called */
		sock_release(ctx->ring_sock);
	}
#endif
	WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));

	io_mem_free(ctx->rings);
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
io_mem_free(ctx->sq_sqes);
|
|
|
|
|
|
|
|
percpu_ref_exit(&ctx->refs);
|
|
|
|
free_uid(ctx->user);
|
2021-02-28 01:04:18 +03:00
|
|
|
io_req_caches_free(ctx);
|
2021-02-19 22:33:30 +03:00
|
|
|
if (ctx->hash_map)
|
|
|
|
io_wq_put_hash(ctx->hash_map);
|
2019-12-05 05:56:40 +03:00
|
|
|
kfree(ctx->cancel_hash);
|
2021-04-28 15:11:29 +03:00
|
|
|
kfree(ctx->dummy_ubuf);
|
2022-05-01 19:52:44 +03:00
|
|
|
kfree(ctx->io_bl);
|
|
|
|
xa_destroy(&ctx->io_bl_xa);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
kfree(ctx);
|
|
|
|
}
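
/*
 * poll/epoll support for the ring fd itself. EPOLLOUT means the SQ ring
 * has room for new submissions, EPOLLIN means CQ entries (or overflowed
 * completions) are available to reap. Illustrative userspace sketch, not
 * part of this file (assumes an already set up ring_fd and epoll fd):
 *
 *      struct epoll_event ev = { .events = EPOLLIN };
 *      epoll_ctl(epfd, EPOLL_CTL_ADD, ring_fd, &ev);
 *      epoll_wait(epfd, &ev, 1, -1);   // wakes when CQEs can be reaped
 */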

static __poll_t io_uring_poll(struct file *file, poll_table *wait)
{
        struct io_ring_ctx *ctx = file->private_data;
        __poll_t mask = 0;

        poll_wait(file, &ctx->cq_wait, wait);
        /*
         * synchronizes with barrier from wq_has_sleeper call in
         * io_commit_cqring
         */
        smp_rmb();
        if (!io_sqring_full(ctx))
                mask |= EPOLLOUT | EPOLLWRNORM;

        /*
         * Don't flush cqring overflow list here, just do a simple check.
         * Otherwise there could possibly be an ABBA deadlock:
         *      CPU0                    CPU1
         *      ----                    ----
         * lock(&ctx->uring_lock);
         *                              lock(&ep->mtx);
         *                              lock(&ctx->uring_lock);
         * lock(&ep->mtx);
         *
         * Users may get EPOLLIN while seeing nothing in the cqring; this
         * pushes them to do the flush.
         */
        if (io_cqring_events(ctx) ||
            test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
                mask |= EPOLLIN | EPOLLRDNORM;

        return mask;
}

static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
{
        const struct cred *creds;

        creds = xa_erase(&ctx->personalities, id);
        if (creds) {
                put_cred(creds);
                return 0;
        }

        return -EINVAL;
}
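
/*
 * Ring teardown needs each participating task to drop its io_tctx_node
 * for this ctx. io_ring_exit_work() below queues io_tctx_exit_cb() as
 * task_work on every such task and waits on the completion embedded here.
 */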

struct io_tctx_exit {
        struct callback_head task_work;
        struct completion completion;
        struct io_ring_ctx *ctx;
};

static __cold void io_tctx_exit_cb(struct callback_head *cb)
{
        struct io_uring_task *tctx = current->io_uring;
        struct io_tctx_exit *work;

        work = container_of(cb, struct io_tctx_exit, task_work);
        /*
         * When @in_idle, we're in cancellation and it's racy to remove the
         * node. It'll be removed by the end of cancellation, just ignore it.
         */
        if (!atomic_read(&tctx->in_idle))
                io_uring_del_tctx_node((unsigned long)work->ctx);
        complete(&work->completion);
}

static __cold bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
{
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);

        return req->ctx == data;
}
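
/*
 * Deferred ring teardown: repeatedly cancel outstanding requests (and any
 * SQPOLL io-wq work) until all ctx references are gone, then ask every
 * task still attached to this ctx to drop its tctx node via task_work,
 * and finally free the ctx.
 */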

static __cold void io_ring_exit_work(struct work_struct *work)
{
        struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
        unsigned long timeout = jiffies + HZ * 60 * 5;
        unsigned long interval = HZ / 20;
        struct io_tctx_exit exit;
        struct io_tctx_node *node;
        int ret;

        /*
         * If we're doing polled IO and end up having requests being
         * submitted async (out-of-line), then completions can come in while
         * we're waiting for refs to drop. We need to reap these manually,
         * as nobody else will be looking for them.
         */
        do {
                io_uring_try_cancel_requests(ctx, NULL, true);
                if (ctx->sq_data) {
                        struct io_sq_data *sqd = ctx->sq_data;
                        struct task_struct *tsk;

                        io_sq_thread_park(sqd);
                        tsk = sqd->thread;
                        if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
                                io_wq_cancel_cb(tsk->io_uring->io_wq,
                                                io_cancel_ctx_cb, ctx, true);
                        io_sq_thread_unpark(sqd);
                }

                io_req_caches_free(ctx);

                if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
                        /* there is little hope left, don't run it too often */
                        interval = HZ * 60;
                }
        } while (!wait_for_completion_timeout(&ctx->ref_comp, interval));

        init_completion(&exit.completion);
        init_task_work(&exit.task_work, io_tctx_exit_cb);
        exit.ctx = ctx;
        /*
         * Some may use the context even when all refs and requests have been
         * put, and they are free to do so while still holding uring_lock or
         * completion_lock, see io_req_task_submit(). Apart from other work,
         * this lock/unlock section also waits for them to finish.
         */
        mutex_lock(&ctx->uring_lock);
        while (!list_empty(&ctx->tctx_list)) {
                WARN_ON_ONCE(time_after(jiffies, timeout));

                node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
                                        ctx_node);
                /* don't spin on a single task if cancellation failed */
                list_rotate_left(&ctx->tctx_list);
                ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
                if (WARN_ON_ONCE(ret))
                        continue;

                mutex_unlock(&ctx->uring_lock);
                wait_for_completion(&exit.completion);
                mutex_lock(&ctx->uring_lock);
        }
        mutex_unlock(&ctx->uring_lock);
        spin_lock(&ctx->completion_lock);
        spin_unlock(&ctx->completion_lock);

        io_ring_ctx_free(ctx);
}
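
/*
 * Start killing the ctx: drop the percpu refs, flush overflowed CQEs and
 * registered personalities, reap what can be reaped inline, then punt the
 * potentially long-running remainder to io_ring_exit_work().
 */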

static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
{
        unsigned long index;
        struct creds *creds;

        mutex_lock(&ctx->uring_lock);
        percpu_ref_kill(&ctx->refs);
        if (ctx->rings)
                __io_cqring_overflow_flush(ctx, true);
        xa_for_each(&ctx->personalities, index, creds)
                io_unregister_personality(ctx, index);
        mutex_unlock(&ctx->uring_lock);

        /* failed during ring init, it couldn't have issued any requests */
        if (ctx->rings) {
                io_kill_timeouts(ctx, NULL, true);
                io_poll_remove_all(ctx, NULL, true);
                /* if we failed setting up the ctx, we might not have any rings */
                io_iopoll_try_reap_events(ctx);
        }

        INIT_WORK(&ctx->exit_work, io_ring_exit_work);
        /*
         * Use system_unbound_wq to avoid spawning tons of event kworkers
         * if we're exiting a ton of rings at the same time. It just adds
         * noise and overhead, there's no discernable change in runtime
         * over using system_wq.
         */
        queue_work(system_unbound_wq, &ctx->exit_work);
}
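
/*
 * ->release() for the ring fd: called when the last reference to the ring
 * file is put, and kicks off the asynchronous teardown above.
 */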

static int io_uring_release(struct inode *inode, struct file *file)
{
        struct io_ring_ctx *ctx = file->private_data;

        file->private_data = NULL;
        io_ring_ctx_wait_and_kill(ctx);
        return 0;
}

struct io_task_cancel {
        struct task_struct *task;
        bool all;
};

static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
{
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);
        struct io_task_cancel *cancel = data;

        return io_match_task_safe(req, cancel->task, cancel->all);
}

static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
                                         struct task_struct *task,
                                         bool cancel_all)
{
        struct io_defer_entry *de;
        LIST_HEAD(list);

        spin_lock(&ctx->completion_lock);
        list_for_each_entry_reverse(de, &ctx->defer_list, list) {
                if (io_match_task_safe(de->req, task, cancel_all)) {
                        list_cut_position(&list, &ctx->defer_list, &de->list);
                        break;
                }
        }
        spin_unlock(&ctx->completion_lock);
        if (list_empty(&list))
                return false;

        while (!list_empty(&list)) {
                de = list_first_entry(&list, struct io_defer_entry, list);
                list_del_init(&de->list);
                io_req_complete_failed(de->req, -ECANCELED);
                kfree(de);
        }
        return true;
}

static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
{
        struct io_tctx_node *node;
        enum io_wq_cancel cret;
        bool ret = false;

        mutex_lock(&ctx->uring_lock);
        list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
                struct io_uring_task *tctx = node->task->io_uring;

                /*
                 * io_wq will stay alive while we hold uring_lock, because it's
                 * killed after ctx nodes, which requires taking the lock.
                 */
                if (!tctx || !tctx->io_wq)
                        continue;
                cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
                ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
        }
        mutex_unlock(&ctx->uring_lock);

        return ret;
}
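
/*
 * Cancel as much outstanding work as possible for @task (or for the whole
 * ctx when @task is NULL): io-wq work, deferred requests, poll requests
 * and timeouts. Loops until a pass makes no further progress.
 */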

static __cold void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
                                                struct task_struct *task,
                                                bool cancel_all)
{
        struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
        struct io_uring_task *tctx = task ? task->io_uring : NULL;

        /* failed during ring init, it couldn't have issued any requests */
        if (!ctx->rings)
                return;

        while (1) {
                enum io_wq_cancel cret;
                bool ret = false;

                if (!task) {
                        ret |= io_uring_try_cancel_iowq(ctx);
                } else if (tctx && tctx->io_wq) {
                        /*
                         * Cancels requests of all rings, not only @ctx, but
                         * it's fine as the task is in exit/exec.
                         */
                        cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
                                               &cancel, true);
                        ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
                }

                /* SQPOLL thread does its own polling */
                if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
                    (ctx->sq_data && ctx->sq_data->thread == current)) {
                        while (!wq_list_empty(&ctx->iopoll_list)) {
                                io_iopoll_try_reap_events(ctx);
                                ret = true;
                        }
                }

                ret |= io_cancel_defer_files(ctx, task, cancel_all);
                ret |= io_poll_remove_all(ctx, task, cancel_all);
                ret |= io_kill_timeouts(ctx, task, cancel_all);
                if (task)
                        ret |= io_run_task_work();
                if (!ret)
                        break;
                cond_resched();
        }
}
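
/*
 * Number of requests this task still has in flight; @tracked restricts the
 * count to the explicitly tracked subset rather than the full per-task
 * percpu counter.
 */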

static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
{
        if (tracked)
                return atomic_read(&tctx->inflight_tracked);
        return percpu_counter_sum(&tctx->inflight);
}

/*
 * Find any io_uring ctx that this task has registered or done IO on, and cancel
 * requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
 */
__cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
{
        struct io_uring_task *tctx = current->io_uring;
        struct io_ring_ctx *ctx;
        s64 inflight;
        DEFINE_WAIT(wait);

        WARN_ON_ONCE(sqd && sqd->thread != current);

        if (!current->io_uring)
                return;
        if (tctx->io_wq)
                io_wq_exit_start(tctx->io_wq);

        atomic_inc(&tctx->in_idle);
        do {
                io_uring_drop_tctx_refs(current);
                /* read completions before cancelations */
                inflight = tctx_inflight(tctx, !cancel_all);
                if (!inflight)
                        break;

                if (!sqd) {
                        struct io_tctx_node *node;
                        unsigned long index;

                        xa_for_each(&tctx->xa, index, node) {
                                /* sqpoll task will cancel all its requests */
                                if (node->ctx->sq_data)
                                        continue;
                                io_uring_try_cancel_requests(node->ctx, current,
                                                             cancel_all);
                        }
                } else {
                        list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
                                io_uring_try_cancel_requests(ctx, current,
                                                             cancel_all);
                }

                prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
                io_run_task_work();
                io_uring_drop_tctx_refs(current);

                /*
                 * If we've seen completions, retry without waiting. This
                 * avoids a race where a completion comes in before we did
                 * prepare_to_wait().
                 */
                if (inflight == tctx_inflight(tctx, !cancel_all))
                        schedule();
                finish_wait(&tctx->wait, &wait);
        } while (1);

        io_uring_clean_tctx(tctx);
        if (cancel_all) {
                /*
                 * We shouldn't run task_works after cancel, so just leave
                 * ->in_idle set for normal exit.
                 */
                atomic_dec(&tctx->in_idle);
                /* for exec all current's requests should be gone, kill tctx */
                __io_uring_free(current);
        }
}
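
/*
 * Entry point for task-wide cancellation without an SQPOLL sqd; reached
 * from the task exit and exec paths via the io_uring_files_cancel() and
 * io_uring_task_cancel() helpers.
 */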

void __io_uring_cancel(bool cancel_all)
{
        io_uring_cancel_generic(cancel_all, NULL);
}
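
/*
 * The ring fd exposes three mmap regions, selected by file offset:
 * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING (both backed by ctx->rings)
 * and IORING_OFF_SQES (the SQE array). Illustrative userspace sketch, not
 * part of this file; sizes come from the io_uring_params p filled in by
 * io_uring_setup():
 *
 *      sq_sz   = p.sq_off.array + p.sq_entries * sizeof(__u32);
 *      cq_sz   = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
 *      sqes_sz = p.sq_entries * sizeof(struct io_uring_sqe);
 *
 *      sq   = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
 *                  MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_SQ_RING);
 *      cq   = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
 *                  MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_CQ_RING);
 *      sqes = mmap(NULL, sqes_sz, PROT_READ | PROT_WRITE,
 *                  MAP_SHARED | MAP_POPULATE, ring_fd, IORING_OFF_SQES);
 */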
|
|
|
|
|
2019-11-28 14:53:22 +03:00
|
|
|
static void *io_uring_validate_mmap_request(struct file *file,
|
|
|
|
loff_t pgoff, size_t sz)
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
2019-11-28 14:53:22 +03:00
|
|
|
loff_t offset = pgoff << PAGE_SHIFT;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
struct page *page;
|
|
|
|
void *ptr;
|
|
|
|
|
|
|
|
switch (offset) {
|
|
|
|
case IORING_OFF_SQ_RING:
|
2019-08-26 20:23:46 +03:00
|
|
|
case IORING_OFF_CQ_RING:
|
|
|
|
ptr = ctx->rings;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
break;
|
|
|
|
case IORING_OFF_SQES:
|
|
|
|
ptr = ctx->sq_sqes;
|
|
|
|
break;
|
|
|
|
default:
|
2019-11-28 14:53:22 +03:00
|
|
|
return ERR_PTR(-EINVAL);
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
page = virt_to_head_page(ptr);
|
2019-09-24 01:34:25 +03:00
|
|
|
if (sz > page_size(page))
|
2019-11-28 14:53:22 +03:00
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
|
|
|
|
return ptr;
|
|
|
|
}

#ifdef CONFIG_MMU

static __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	size_t sz = vma->vm_end - vma->vm_start;
	unsigned long pfn;
	void *ptr;

	ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}

#else /* !CONFIG_MMU */

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
	return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
}

static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
{
	return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
}

static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
	unsigned long addr, unsigned long len,
	unsigned long pgoff, unsigned long flags)
{
	void *ptr;

	ptr = io_uring_validate_mmap_request(file, pgoff, len);
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	return (unsigned long) ptr;
}

#endif /* !CONFIG_MMU */
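These are the mappings an application sets up once, right after io_uring_setup(2), using the offsets validated above. A minimal userspace sketch follows; it uses only the uapi definitions from <linux/io_uring.h> and the raw syscall number, and the helper name map_rings is illustrative, not part of this file:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

/* Illustrative helper: map the SQ ring, CQ ring and SQE array of one ring. */
static int map_rings(unsigned entries)
{
	struct io_uring_params p;
	void *sq_ptr, *cq_ptr, *sqes;
	size_t sq_sz, cq_sz;
	int fd;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, entries, &p);
	if (fd < 0)
		return -1;

	/* Ring sizes are derived from the offsets the kernel reports back. */
	sq_sz = p.sq_off.array + p.sq_entries * sizeof(unsigned);
	cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);

	sq_ptr = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
		      MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
	cq_ptr = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
		      MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_CQ_RING);
	sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
		    fd, IORING_OFF_SQES);
	if (sq_ptr == MAP_FAILED || cq_ptr == MAP_FAILED || sqes == MAP_FAILED)
		return -1;

	printf("sq ring %p, cq ring %p, sqes %p\n", sq_ptr, cq_ptr, sqes);
	return fd;
}

On kernels that advertise IORING_FEAT_SINGLE_MMAP the SQ and CQ rings can be covered by one mapping, but the two-mapping form above remains valid.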

static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t argsz)
{
	if (flags & IORING_ENTER_EXT_ARG) {
		struct io_uring_getevents_arg arg;

		if (argsz != sizeof(arg))
			return -EINVAL;
		if (copy_from_user(&arg, argp, sizeof(arg)))
			return -EFAULT;
	}
	return 0;
}

static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz,
			  struct __kernel_timespec __user **ts,
			  const sigset_t __user **sig)
{
	struct io_uring_getevents_arg arg;

	/*
	 * If EXT_ARG isn't set, then we have no timespec and the argp pointer
	 * is just a pointer to the sigset_t.
	 */
	if (!(flags & IORING_ENTER_EXT_ARG)) {
		*sig = (const sigset_t __user *) argp;
		*ts = NULL;
		return 0;
	}

	/*
	 * EXT_ARG is set - ensure we agree on the size of it and copy in our
	 * timespec and sigset_t pointers if good.
	 */
	if (*argsz != sizeof(arg))
		return -EINVAL;
	if (copy_from_user(&arg, argp, sizeof(arg)))
		return -EFAULT;
	if (arg.pad)
		return -EINVAL;
	*sig = u64_to_user_ptr(arg.sigmask);
	*argsz = arg.sigmask_sz;
	*ts = u64_to_user_ptr(arg.ts);
	return 0;
}
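A userspace sketch of the EXT_ARG form that io_get_ext_arg() decodes: instead of passing a sigset_t directly, the last two io_uring_enter(2) arguments carry a struct io_uring_getevents_arg and its size, which lets a wait also carry a timeout. This is a minimal sketch assuming a libc that exposes __NR_io_uring_enter; the helper name wait_one_cqe is illustrative, and real applications would normally go through liburing:

#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>
#include <linux/time_types.h>

/* Illustrative: wait for one completion on ring_fd with a 1s timeout. */
static int wait_one_cqe(int ring_fd)
{
	struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
	struct io_uring_getevents_arg arg;

	memset(&arg, 0, sizeof(arg));	/* .pad must stay zero, .sigmask unused */
	arg.ts = (uintptr_t) &ts;	/* kernel treats this as a user pointer */

	return syscall(__NR_io_uring_enter, ring_fd, 0, 1,
		       IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
		       &arg, sizeof(arg));
}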

SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
		u32, min_complete, u32, flags, const void __user *, argp,
		size_t, argsz)
{
	struct io_ring_ctx *ctx;
	struct fd f;
	long ret;

	io_run_task_work();

	if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
			       IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
			       IORING_ENTER_REGISTERED_RING)))
		return -EINVAL;

	/*
	 * Ring fd has been registered via IORING_REGISTER_RING_FDS, we
	 * need only dereference our task private array to find it.
	 */
	if (flags & IORING_ENTER_REGISTERED_RING) {
		struct io_uring_task *tctx = current->io_uring;

		if (!tctx || fd >= IO_RINGFD_REG_MAX)
			return -EINVAL;
		fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
		f.file = tctx->registered_rings[fd];
		f.flags = 0;
	} else {
		f = fdget(fd);
	}

	if (unlikely(!f.file))
		return -EBADF;

	ret = -EOPNOTSUPP;
	if (unlikely(!io_is_uring_fops(f.file)))
		goto out_fput;

	ret = -ENXIO;
	ctx = f.file->private_data;
	if (unlikely(!percpu_ref_tryget(&ctx->refs)))
		goto out_fput;

	ret = -EBADFD;
	if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
		goto out;

	/*
	 * For SQ polling, the thread will do all submissions and completions.
	 * Just return the requested submit count, and wake the thread if
	 * we were asked to.
	 */
	ret = 0;
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		io_cqring_overflow_flush(ctx);

		if (unlikely(ctx->sq_data->thread == NULL)) {
			ret = -EOWNERDEAD;
			goto out;
		}
		if (flags & IORING_ENTER_SQ_WAKEUP)
			wake_up(&ctx->sq_data->wait);
		if (flags & IORING_ENTER_SQ_WAIT) {
			ret = io_sqpoll_wait_sq(ctx);
			if (ret)
				goto out;
		}
		ret = to_submit;
	} else if (to_submit) {
		ret = io_uring_add_tctx_node(ctx);
		if (unlikely(ret))
			goto out;

		mutex_lock(&ctx->uring_lock);
		ret = io_submit_sqes(ctx, to_submit);
		if (ret != to_submit) {
			mutex_unlock(&ctx->uring_lock);
			goto out;
		}
		if ((flags & IORING_ENTER_GETEVENTS) && ctx->syscall_iopoll)
			goto iopoll_locked;
		mutex_unlock(&ctx->uring_lock);
	}

	if (flags & IORING_ENTER_GETEVENTS) {
		int ret2;

		if (ctx->syscall_iopoll) {
			/*
			 * We disallow the app entering submit/complete with
			 * polling, but we still need to lock the ring to
			 * prevent racing with polled issue that got punted to
			 * a workqueue.
			 */
			mutex_lock(&ctx->uring_lock);
iopoll_locked:
			ret2 = io_validate_ext_arg(flags, argp, argsz);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_iopoll_check(ctx, min_complete);
			}
			mutex_unlock(&ctx->uring_lock);
		} else {
			const sigset_t __user *sig;
			struct __kernel_timespec __user *ts;

			ret2 = io_get_ext_arg(flags, argp, &argsz, &ts, &sig);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_cqring_wait(ctx, min_complete, sig,
						      argsz, ts);
			}
		}

		if (!ret) {
			ret = ret2;

			/*
			 * EBADR indicates that one or more CQE were dropped.
			 * Once the user has been informed we can clear the bit
			 * as they are obviously ok with those drops.
			 */
			if (unlikely(ret2 == -EBADR))
				clear_bit(IO_CHECK_CQ_DROPPED_BIT,
					  &ctx->check_cq);
		}
	}

out:
	percpu_ref_put(&ctx->refs);
out_fput:
	fdput(f);
	return ret;
}
|
|
|
|
|
|
|
|
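/*
 * Illustrative userspace sketch (not part of this file; ring_fd,
 * to_submit and min_complete are placeholders): with no libc wrapper,
 * io_uring_enter() is reached through the raw syscall number, e.g.
 *
 *	#include <linux/io_uring.h>
 *	#include <sys/syscall.h>
 *	#include <unistd.h>
 *
 *	int consumed = syscall(__NR_io_uring_enter, ring_fd, to_submit,
 *			       min_complete, IORING_ENTER_GETEVENTS, NULL, 0);
 *
 * where a non-negative return value is the number of sqes consumed
 * from the SQ ring.
 */
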
static const struct file_operations io_uring_fops = {
	.release	= io_uring_release,
	.mmap		= io_uring_mmap,
#ifndef CONFIG_MMU
	.get_unmapped_area = io_uring_nommu_get_unmapped_area,
	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
#endif
	.poll		= io_uring_poll,
#ifdef CONFIG_PROC_FS
	.show_fdinfo	= io_uring_show_fdinfo,
#endif
};

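/*
 * Illustrative userspace sketch (not part of this file, error handling
 * omitted): the fixed mmap offsets from the uapi header select which
 * region of the ring fd is mapped through the ->mmap handler above,
 * roughly:
 *
 *	struct io_uring_params p = { };
 *	int ring_fd = syscall(__NR_io_uring_setup, 8, &p);
 *	void *sq_ring = mmap(NULL,
 *			p.sq_off.array + p.sq_entries * sizeof(__u32),
 *			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
 *			ring_fd, IORING_OFF_SQ_RING);
 *	struct io_uring_sqe *sqes = mmap(NULL,
 *			p.sq_entries * sizeof(struct io_uring_sqe),
 *			PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
 *			ring_fd, IORING_OFF_SQES);
 *
 * The CQ ring is mapped the same way using IORING_OFF_CQ_RING and the
 * cq_off fields.
 */
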
bool io_is_uring_fops(struct file *file)
{
	return file->f_op == &io_uring_fops;
}

static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
					 struct io_uring_params *p)
{
	struct io_rings *rings;
	size_t size, sq_array_offset;

	/* make sure these are sane, as we already accounted them */
	ctx->sq_entries = p->sq_entries;
	ctx->cq_entries = p->cq_entries;

	size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
	if (size == SIZE_MAX)
		return -EOVERFLOW;

	rings = io_mem_alloc(size);
	if (!rings)
		return -ENOMEM;

	ctx->rings = rings;
	ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
	rings->sq_ring_mask = p->sq_entries - 1;
	rings->cq_ring_mask = p->cq_entries - 1;
	rings->sq_ring_entries = p->sq_entries;
	rings->cq_ring_entries = p->cq_entries;

	if (p->flags & IORING_SETUP_SQE128)
		size = array_size(2 * sizeof(struct io_uring_sqe), p->sq_entries);
	else
		size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
	if (size == SIZE_MAX) {
		io_mem_free(ctx->rings);
		ctx->rings = NULL;
		return -EOVERFLOW;
	}

	ctx->sq_sqes = io_mem_alloc(size);
	if (!ctx->sq_sqes) {
		io_mem_free(ctx->rings);
		ctx->rings = NULL;
		return -ENOMEM;
	}

	return 0;
}

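/*
 * Worked example of the masks initialized above (illustrative only):
 * ring sizes are powers of two, so with p->sq_entries == 128 the mask
 * is 127 and an SQ slot is always picked as
 *
 *	unsigned int index = tail & rings->sq_ring_mask;
 *
 * which lets the head/tail counters increment freely as 32-bit values
 * while array accesses stay inside the ring.
 */
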
static int io_uring_install_fd(struct io_ring_ctx *ctx, struct file *file)
{
	int ret, fd;

	fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return fd;

	ret = io_uring_add_tctx_node(ctx);
	if (ret) {
		put_unused_fd(fd);
		return ret;
	}
	fd_install(fd, file);
	return fd;
}

/*
 * Allocate an anonymous fd, this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
 * we have to tie this fd to a socket for file garbage collection purposes.
 */
static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
{
	struct file *file;
#if defined(CONFIG_UNIX)
	int ret;

	ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
				&ctx->ring_sock);
	if (ret)
		return ERR_PTR(ret);
#endif

	file = anon_inode_getfile_secure("[io_uring]", &io_uring_fops, ctx,
					 O_RDWR | O_CLOEXEC, NULL);
#if defined(CONFIG_UNIX)
	if (IS_ERR(file)) {
		sock_release(ctx->ring_sock);
		ctx->ring_sock = NULL;
	} else {
		ctx->ring_sock->file = file;
	}
#endif
	return file;
}

static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
|
|
|
|
struct io_uring_params __user *params)
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
2020-12-21 21:34:05 +03:00
|
|
|
struct file *file;
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
int ret;
|
|
|
|
|
2019-12-29 01:39:54 +03:00
|
|
|
if (!entries)
|
Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time, this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 20:46:33 +03:00
|
|
|
return -EINVAL;
|
2019-12-29 01:39:54 +03:00
|
|
|
if (entries > IORING_MAX_ENTRIES) {
|
|
|
|
if (!(p->flags & IORING_SETUP_CLAMP))
|
|
|
|
return -EINVAL;
|
|
|
|
entries = IORING_MAX_ENTRIES;
|
|
|
|
}
|
        /*
         * Use twice as many entries for the CQ ring. It's possible for the
         * application to drive a higher depth than the size of the SQ ring,
         * since the sqes are only used at submission time. This allows for
         * some flexibility in overcommitting a bit. If the application has
         * set IORING_SETUP_CQSIZE, it will have passed in the desired number
         * of CQ ring entries manually.
         */
        p->sq_entries = roundup_pow_of_two(entries);
        if (p->flags & IORING_SETUP_CQSIZE) {
                /*
                 * If IORING_SETUP_CQSIZE is set, we do the same roundup
                 * to a power-of-two, if it isn't already. We do NOT impose
                 * any cq vs sq ring sizing.
                 */
                if (!p->cq_entries)
                        return -EINVAL;
                if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
                        if (!(p->flags & IORING_SETUP_CLAMP))
                                return -EINVAL;
                        p->cq_entries = IORING_MAX_CQ_ENTRIES;
                }
                p->cq_entries = roundup_pow_of_two(p->cq_entries);
                if (p->cq_entries < p->sq_entries)
                        return -EINVAL;
        } else {
                p->cq_entries = 2 * p->sq_entries;
        }
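From userspace, the sizing logic above is driven purely through struct io_uring_params. A minimal sketch, not part of the kernel source, assuming the libc headers expose __NR_io_uring_setup and a kernel that knows IORING_SETUP_CQSIZE and IORING_SETUP_CLAMP; the helper name is illustrative:

/* Request a CQ ring four times the SQ size and let the kernel clamp
 * oversized values instead of failing with -EINVAL.
 */
#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int setup_ring_with_big_cq(unsigned entries)
{
        struct io_uring_params p;

        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_CQSIZE | IORING_SETUP_CLAMP;
        p.cq_entries = entries * 4;     /* rounded up, must end up >= sq_entries */

        int fd = syscall(__NR_io_uring_setup, entries, &p);
        if (fd < 0)
                perror("io_uring_setup");
        else
                printf("sq_entries=%u cq_entries=%u\n",
                       p.sq_entries, p.cq_entries);
        return fd;
}

On return, p.sq_entries and p.cq_entries hold the sizes the kernel actually picked after the power-of-two roundup shown above.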
        ctx = io_ring_ctx_alloc(p);
        if (!ctx)
                return -ENOMEM;

        /*
         * When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
         * space applications don't need to do io completion events
         * polling again, they can rely on io_sq_thread to do polling
         * work, which can reduce cpu usage and uring_lock contention.
         */
        if (ctx->flags & IORING_SETUP_IOPOLL &&
            !(ctx->flags & IORING_SETUP_SQPOLL))
                ctx->syscall_iopoll = 1;
        ctx->compat = in_compat_syscall();
        if (!capable(CAP_IPC_LOCK))
                ctx->user = get_uid(current_user());

        /*
         * For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
         * COOP_TASKRUN is set, then IPIs are never needed by the app.
         */
        ret = -EINVAL;
        if (ctx->flags & IORING_SETUP_SQPOLL) {
                /* IPI related flags don't make sense with SQPOLL */
                if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
                                  IORING_SETUP_TASKRUN_FLAG))
                        goto err;
                ctx->notify_method = TWA_SIGNAL_NO_IPI;
        } else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
                ctx->notify_method = TWA_SIGNAL_NO_IPI;
        } else {
                if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                        goto err;
                ctx->notify_method = TWA_SIGNAL;
        }

        /*
         * This is just grabbed for accounting purposes. When a process exits,
         * the mm is exited and dropped before the files, hence we need to hang
         * on to this mm purely for the purposes of being able to unaccount
         * memory (locked/pinned vm). It's not used for anything else.
         */
        mmgrab(current->mm);
        ctx->mm_account = current->mm;
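The SQPOLL branch above rejects the IPI-avoidance flags outright, since the poll thread never needs task_work IPIs in the first place. A small userspace sketch of the resulting behaviour, under the assumption that the running kernel knows IORING_SETUP_COOP_TASKRUN (older kernels reject the unknown flag with the same errno):

/* Combining SQPOLL with COOP_TASKRUN is expected to fail with EINVAL. */
#include <errno.h>
#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct io_uring_params p;

        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_COOP_TASKRUN;

        int fd = syscall(__NR_io_uring_setup, 8, &p);
        if (fd < 0 && errno == EINVAL)
                printf("rejected as expected: SQPOLL excludes COOP_TASKRUN\n");
        else if (fd >= 0)
                close(fd);
        return 0;
}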
        ret = io_allocate_scq_urings(ctx, p);
        if (ret)
                goto err;

        ret = io_sq_offload_create(ctx, p);
        if (ret)
                goto err;
        /* always set a rsrc node */
        ret = io_rsrc_node_switch_start(ctx);
        if (ret)
                goto err;
        io_rsrc_node_switch(ctx, NULL);
        memset(&p->sq_off, 0, sizeof(p->sq_off));
        p->sq_off.head = offsetof(struct io_rings, sq.head);
        p->sq_off.tail = offsetof(struct io_rings, sq.tail);
        p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
        p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
        p->sq_off.flags = offsetof(struct io_rings, sq_flags);
        p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
        p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
        memset(&p->cq_off, 0, sizeof(p->cq_off));
        p->cq_off.head = offsetof(struct io_rings, cq.head);
        p->cq_off.tail = offsetof(struct io_rings, cq.tail);
        p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
        p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
        p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
        p->cq_off.cqes = offsetof(struct io_rings, cqes);
        p->cq_off.flags = offsetof(struct io_rings, cq_flags);

        p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
                        IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
                        IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
                        IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
                        IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
                        IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
                        IORING_FEAT_LINKED_FILE;

        if (copy_to_user(params, p, sizeof(*p))) {
                ret = -EFAULT;
                goto err;
        }

        file = io_uring_get_file(ctx);
        if (IS_ERR(file)) {
                ret = PTR_ERR(file);
                goto err;
        }

        /*
         * Install ring fd as the very last thing, so we don't risk someone
         * having closed it before we finish setup
         */
        ret = io_uring_install_fd(ctx, file);
        if (ret < 0) {
                /* fput will clean it up */
                fput(file);
                return ret;
        }
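The feature bits assembled above are copied back to the application inside the params structure, so userspace can probe them before relying on a capability. A minimal sketch, not part of the kernel source, assuming __NR_io_uring_setup is available via <sys/syscall.h>:

/* Check which features the kernel advertised in io_uring_params.features. */
#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct io_uring_params p;

        memset(&p, 0, sizeof(p));
        int fd = syscall(__NR_io_uring_setup, 4, &p);
        if (fd < 0)
                return 1;

        if (p.features & IORING_FEAT_SINGLE_MMAP)
                printf("SQ and CQ rings share a single mmap region\n");
        if (p.features & IORING_FEAT_NODROP)
                printf("completions are never dropped, they overflow instead\n");

        close(fd);
        return 0;
}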
io_uring: add set of tracing events
To trace io_uring activity one can get information from the workqueue and
io trace events, but some parts could be hard to identify via that
approach alone. Making what happens inside io_uring more transparent is
important for reasoning about many aspects of it, hence introduce a set
of tracing events.
All such events can be roughly divided into two categories:
* those that help to understand correctness (from both the kernel and
an application point of view). E.g. a ring creation, file
registration, or waiting for an available CQE. The proposed approach is
to get a pointer to the original structure of interest (ring context, or
request), and then find the relevant events. io_uring_queue_async_work
also exposes a pointer to the work_struct, to be able to track down
the corresponding workqueue events.
* those that provide performance related information. Mostly this is about
events that change the flow of requests, e.g. whether an async work
item was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
        trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
        return ret;
err:
        io_ring_ctx_wait_and_kill(ctx);
        return ret;
}

/*
 * Sets up an aio uring context, and returns the fd. The application asks for
 * a ring size, and we return the actual sq/cq ring sizes (among other things)
 * in the params structure passed in.
 */
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
{
        struct io_uring_params p;
        int i;

        if (copy_from_user(&p, params, sizeof(p)))
                return -EFAULT;
        for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
                if (p.resv[i])
                        return -EINVAL;
        }
io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically, an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
        io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 21:22:30 +03:00
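A fleshed-out userspace version of that guard, as a sketch rather than a definitive implementation: it assumes 'sq_flags' points at the SQ ring flags word that the application mapped via mmap() at p.sq_off.flags, and that __NR_io_uring_enter is exposed by the libc headers; the mapping setup itself is omitted.

/* Only enter the kernel when the SQPOLL thread has gone idle. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static void kick_sq_thread_if_needed(int ring_fd, const unsigned *sq_flags)
{
        /* acquire load pairs with the kernel's store to the flags word */
        unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

        if (flags & IORING_SQ_NEED_WAKEUP)
                syscall(__NR_io_uring_enter, ring_fd, 0, 0,
                        IORING_ENTER_SQ_WAKEUP, NULL, 0);
}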
        if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
                        IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
                        IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
                        IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
                        IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
                        IORING_SETUP_SQE128 | IORING_SETUP_CQE32))
                return -EINVAL;

        return io_uring_create(entries, &p, params);
}

SYSCALL_DEFINE2(io_uring_setup, u32, entries,
                struct io_uring_params __user *, params)
{
        return io_uring_setup(entries, params);
}

static __cold int io_probe(struct io_ring_ctx *ctx, void __user *arg,
                           unsigned nr_args)
{
        struct io_uring_probe *p;
        size_t size;
        int i, ret;

        size = struct_size(p, ops, nr_args);
        if (size == SIZE_MAX)
                return -EOVERFLOW;
        p = kzalloc(size, GFP_KERNEL);
        if (!p)
                return -ENOMEM;

        ret = -EFAULT;
        if (copy_from_user(p, arg, size))
                goto out;
        ret = -EINVAL;
        if (memchr_inv(p, 0, size))
                goto out;

        p->last_op = IORING_OP_LAST - 1;
        if (nr_args > IORING_OP_LAST)
                nr_args = IORING_OP_LAST;

        for (i = 0; i < nr_args; i++) {
                p->ops[i].op = i;
                if (!io_op_defs[i].not_supported)
                        p->ops[i].flags = IO_URING_OP_SUPPORTED;
        }
        p->ops_len = i;

        ret = 0;
        if (copy_to_user(arg, p, size))
                ret = -EFAULT;
out:
        kfree(p);
        return ret;
}
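io_probe() answers the IORING_REGISTER_PROBE opcode of io_uring_register(2), letting an application discover which sqe opcodes the running kernel supports. A userspace sketch, assuming a raw syscall wrapper and an illustrative helper name; 'ring_fd' is an already set up ring:

/* The probe buffer must be zeroed (the kernel rejects non-zero input),
 * and the kernel fills in one io_uring_probe_op per opcode it knows.
 */
#include <linux/io_uring.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

void print_supported_ops(int ring_fd)
{
        unsigned nr = 256;      /* generously sized; kernel caps at IORING_OP_LAST */
        size_t len = sizeof(struct io_uring_probe) +
                     nr * sizeof(struct io_uring_probe_op);
        struct io_uring_probe *probe = calloc(1, len);

        if (!probe)
                return;
        if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_PROBE,
                    probe, nr) == 0) {
                for (unsigned i = 0; i < probe->ops_len; i++)
                        if (probe->ops[i].flags & IO_URING_OP_SUPPORTED)
                                printf("opcode %u supported\n", probe->ops[i].op);
        }
        free(probe);
}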
static int io_register_personality(struct io_ring_ctx *ctx)
{
        const struct cred *creds;
        u32 id;
        int ret;

        creds = get_current_cred();

        ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)creds,
                        XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
        if (ret < 0) {
                put_cred(creds);
                return ret;
        }
        return id;
}
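From userspace, registering a personality takes no argument payload; the syscall return value is the id that a later sqe can reference through sqe->personality. A minimal sketch under the usual raw-syscall assumption:

/* Snapshot the caller's current credentials as a personality id. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

int register_current_creds(int ring_fd)
{
        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_PERSONALITY, NULL, 0);
}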
static __cold int io_register_restrictions(struct io_ring_ctx *ctx,
                                           void __user *arg, unsigned int nr_args)
{
        struct io_uring_restriction *res;
        size_t size;
        int i, ret;

        /* Restrictions allowed only if rings started disabled */
        if (!(ctx->flags & IORING_SETUP_R_DISABLED))
                return -EBADFD;

        /* We allow only a single restrictions registration */
        if (ctx->restrictions.registered)
                return -EBUSY;

        if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
                return -EINVAL;

        size = array_size(nr_args, sizeof(*res));
        if (size == SIZE_MAX)
                return -EOVERFLOW;

        res = memdup_user(arg, size);
        if (IS_ERR(res))
                return PTR_ERR(res);

        ret = 0;

        for (i = 0; i < nr_args; i++) {
                switch (res[i].opcode) {
                case IORING_RESTRICTION_REGISTER_OP:
                        if (res[i].register_op >= IORING_REGISTER_LAST) {
                                ret = -EINVAL;
                                goto out;
                        }

                        __set_bit(res[i].register_op,
                                  ctx->restrictions.register_op);
                        break;
                case IORING_RESTRICTION_SQE_OP:
                        if (res[i].sqe_op >= IORING_OP_LAST) {
                                ret = -EINVAL;
                                goto out;
                        }

                        __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
                        break;
                case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
                        ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
                        break;
                case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
                        ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
                        break;
                default:
                        ret = -EINVAL;
                        goto out;
                }
        }

out:
        /* Reset all restrictions if an error happened */
        if (ret != 0)
                memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
        else
                ctx->restrictions.registered = true;

        kfree(res);
        return ret;
}
static int io_register_enable_rings(struct io_ring_ctx *ctx)
{
        if (!(ctx->flags & IORING_SETUP_R_DISABLED))
                return -EBADFD;

        if (ctx->restrictions.registered)
                ctx->restricted = 1;

        ctx->flags &= ~IORING_SETUP_R_DISABLED;
        if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
                wake_up(&ctx->sq_data->wait);
        return 0;
}
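Restrictions only apply to rings created with IORING_SETUP_R_DISABLED, and take effect once IORING_REGISTER_ENABLE_RINGS is issued. A userspace sketch of that sequence, with an illustrative helper name and the raw-syscall assumption; after enabling, only the listed sqe opcodes (and no further register opcodes) are allowed:

/* Start the ring disabled, restrict it to READ/WRITE sqes, then enable it. */
#include <linux/io_uring.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int setup_restricted_ring(void)
{
        struct io_uring_params p;
        struct io_uring_restriction res[2];

        memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_R_DISABLED;
        int fd = syscall(__NR_io_uring_setup, 8, &p);
        if (fd < 0)
                return fd;

        memset(res, 0, sizeof(res));
        res[0].opcode = IORING_RESTRICTION_SQE_OP;
        res[0].sqe_op = IORING_OP_READ;
        res[1].opcode = IORING_RESTRICTION_SQE_OP;
        res[1].sqe_op = IORING_OP_WRITE;

        if (syscall(__NR_io_uring_register, fd, IORING_REGISTER_RESTRICTIONS,
                    res, 2) < 0 ||
            syscall(__NR_io_uring_register, fd, IORING_REGISTER_ENABLE_RINGS,
                    NULL, 0) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}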
static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
                                     struct io_uring_rsrc_update2 *up,
                                     unsigned nr_args)
{
        __u32 tmp;
        int err;

        if (check_add_overflow(up->offset, nr_args, &tmp))
                return -EOVERFLOW;
        err = io_rsrc_node_switch_start(ctx);
        if (err)
                return err;

        switch (type) {
        case IORING_RSRC_FILE:
                return __io_sqe_files_update(ctx, up, nr_args);
        case IORING_RSRC_BUFFER:
                return __io_sqe_buffers_update(ctx, up, nr_args);
        }
        return -EINVAL;
}

static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
                                    unsigned nr_args)
{
        struct io_uring_rsrc_update2 up;

        if (!nr_args)
                return -EINVAL;
        memset(&up, 0, sizeof(up));
        if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
                return -EFAULT;
        if (up.resv || up.resv2)
                return -EINVAL;
        return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
}
io_uring: change registration/upd/rsrc tagging ABI
There are a few aspects of the recently added rsrc registration/update
and tagging ABI that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under a
single opcode, which breaks fine-grained control over them via
restrictions. It works for now, but once those are wanted under
restrictions it would require a rework.
It was also inconvenient trying to fit a new resource that doesn't
support all the features (e.g. dynamic update) into the interface, so
it is better to return to IORING_REGISTER_* top-level dispatching.
Second, register/update were expected to accept a type of resource,
however that's not a good idea because there might be several ways of
registering a single resource type, e.g. we may want to add non-contig
buffers or something more exotic such as dma mapped memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them
internal for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 18:37:37 +03:00
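With the top-level dispatching, userspace reaches this path through IORING_REGISTER_FILES2 and struct io_uring_rsrc_register. A sketch of registering a sparse fixed-file table, assuming headers new enough to define IORING_RSRC_REGISTER_SPARSE and the usual raw-syscall wrapper; the helper name is illustrative:

/* Register nr_slots empty fixed-file slots; they can be filled later
 * with file update calls. Note the nr_args argument must carry
 * sizeof(struct io_uring_rsrc_register), which the kernel uses to keep
 * the structure extendible.
 */
#include <linux/io_uring.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int register_sparse_files(int ring_fd, unsigned nr_slots)
{
        struct io_uring_rsrc_register rr;

        memset(&rr, 0, sizeof(rr));
        rr.nr = nr_slots;
        rr.flags = IORING_RSRC_REGISTER_SPARSE; /* data/tags stay NULL */

        return syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_FILES2,
                       &rr, sizeof(rr));
}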
static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
                                   unsigned size, unsigned type)
{
        struct io_uring_rsrc_update2 up;

        if (size != sizeof(up))
                return -EINVAL;
        if (copy_from_user(&up, arg, sizeof(up)))
                return -EFAULT;
        if (!up.nr || up.resv || up.resv2)
                return -EINVAL;
        return __io_register_rsrc_update(ctx, type, &up, up.nr);
}

static __cold int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
                                   unsigned int size, unsigned int type)
{
        struct io_uring_rsrc_register rr;

        /* keep it extendible */
        if (size != sizeof(rr))
                return -EINVAL;

        memset(&rr, 0, sizeof(rr));
        if (copy_from_user(&rr, arg, size))
                return -EFAULT;
        if (!rr.nr || rr.resv2)
                return -EINVAL;
        if (rr.flags & ~IORING_RSRC_REGISTER_SPARSE)
                return -EINVAL;
        switch (type) {
        case IORING_RSRC_FILE:
                if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
                        break;
                return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data),
                                             rr.nr, u64_to_user_ptr(rr.tags));
        case IORING_RSRC_BUFFER:
                if (rr.flags & IORING_RSRC_REGISTER_SPARSE && rr.data)
                        break;
                return io_sqe_buffers_register(ctx, u64_to_user_ptr(rr.data),
                                               rr.nr, u64_to_user_ptr(rr.tags));
        }
        return -EINVAL;
}
static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
                                       void __user *arg, unsigned len)
{
        struct io_uring_task *tctx = current->io_uring;
        cpumask_var_t new_mask;
        int ret;

        if (!tctx || !tctx->io_wq)
                return -EINVAL;

        if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
                return -ENOMEM;

        cpumask_clear(new_mask);
        if (len > cpumask_size())
                len = cpumask_size();

        if (in_compat_syscall()) {
                ret = compat_get_bitmap(cpumask_bits(new_mask),
                                        (const compat_ulong_t __user *)arg,
                                        len * 8 /* CHAR_BIT */);
        } else {
                ret = copy_from_user(new_mask, arg, len);
        }

        if (ret) {
                free_cpumask_var(new_mask);
                return -EFAULT;
        }

        ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
        free_cpumask_var(new_mask);
        return ret;
}
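Since the kernel clamps the copied length to cpumask_size(), userspace can simply hand in a full cpu_set_t. A sketch of pinning the io-wq workers for a ring to two CPUs, with an illustrative helper name and the raw-syscall assumption:

/* Restrict this ring's io-wq workers to CPUs 0 and 1. */
#define _GNU_SOURCE
#include <linux/io_uring.h>
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>

int pin_iowq_workers(int ring_fd)
{
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);
        CPU_SET(1, &mask);

        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_IOWQ_AFF, &mask, sizeof(mask));
}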
static __cold int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
{
        struct io_uring_task *tctx = current->io_uring;

        if (!tctx || !tctx->io_wq)
                return -EINVAL;

        return io_wq_cpu_affinity(tctx->io_wq, NULL);
}
static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
                                               void __user *arg)
        __must_hold(&ctx->uring_lock)
{
        struct io_tctx_node *node;
        struct io_uring_task *tctx = NULL;
        struct io_sq_data *sqd = NULL;
        __u32 new_count[2];
        int i, ret;

        if (copy_from_user(new_count, arg, sizeof(new_count)))
                return -EFAULT;
        for (i = 0; i < ARRAY_SIZE(new_count); i++)
                if (new_count[i] > INT_MAX)
                        return -EINVAL;

        if (ctx->flags & IORING_SETUP_SQPOLL) {
                sqd = ctx->sq_data;
                if (sqd) {
                        /*
                         * Observe the correct sqd->lock -> ctx->uring_lock
                         * ordering. Fine to drop uring_lock here, we hold
                         * a ref to the ctx.
                         */
                        refcount_inc(&sqd->refs);
                        mutex_unlock(&ctx->uring_lock);
                        mutex_lock(&sqd->lock);
                        mutex_lock(&ctx->uring_lock);
                        if (sqd->thread)
                                tctx = sqd->thread->io_uring;
                }
        } else {
                tctx = current->io_uring;
        }

        BUILD_BUG_ON(sizeof(new_count) != sizeof(ctx->iowq_limits));

        for (i = 0; i < ARRAY_SIZE(new_count); i++)
                if (new_count[i])
                        ctx->iowq_limits[i] = new_count[i];
        ctx->iowq_limits_set = true;

        if (tctx && tctx->io_wq) {
                ret = io_wq_max_workers(tctx->io_wq, new_count);
                if (ret)
                        goto err;
        } else {
                memset(new_count, 0, sizeof(new_count));
        }

        if (sqd) {
                mutex_unlock(&sqd->lock);
                io_put_sq_data(sqd);
        }

        if (copy_to_user(arg, new_count, sizeof(new_count)))
                return -EFAULT;

        /* that's it for SQPOLL, only the SQPOLL task creates requests */
        if (sqd)
                return 0;

        /* now propagate the restriction to all registered users */
        list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
                struct io_uring_task *tctx = node->task->io_uring;

                if (WARN_ON_ONCE(!tctx->io_wq))
                        continue;

                for (i = 0; i < ARRAY_SIZE(new_count); i++)
                        new_count[i] = ctx->iowq_limits[i];
                /* ignore errors, it always returns zero anyway */
                (void)io_wq_max_workers(tctx->io_wq, new_count);
        }
        return 0;
err:
        if (sqd) {
                mutex_unlock(&sqd->lock);
                io_put_sq_data(sqd);
        }
        return ret;
}
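From userspace this is IORING_REGISTER_IOWQ_MAX_WORKERS with a two-element __u32 array (bounded, then unbounded workers); the kernel writes the previous limits back into the same array. A minimal sketch under the raw-syscall assumption, with an illustrative helper name:

/* Cap this ring's io-wq pools to 8 bounded and 2 unbounded workers. */
#include <linux/io_uring.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int limit_iowq_workers(int ring_fd)
{
        __u32 counts[2] = { 8, 2 };     /* [0] = bounded, [1] = unbounded */

        int ret = syscall(__NR_io_uring_register, ring_fd,
                          IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2);
        if (ret == 0)
                printf("previous limits: bounded=%u unbounded=%u\n",
                       counts[0], counts[1]);
        return ret;
}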
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at the time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to setup a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring shares the head
with the application, the tail remains private in the kernel.
Provided buffers setup with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring, they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how
many buffers are provided back at a time after they have been
consumed:
Test                   Replenish   NOPs/sec
================================================================
No provided buffers    NA          ~30M
Provided buffers       32          ~16M
Provided buffers       1           ~10M
Ring buffers           32          ~27M
Ring buffers           1           ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care if you provide 1 or more back at
the same time. This means applications can just replenish as they go,
rather than needing to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
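To make the registration flow above concrete, here is a minimal userspace sketch
(not kernel code; the group ID of 0, the ring size of 8 entries and the 4096-byte
page size are illustrative assumptions). The constraints it obeys are the ones
checked in io_register_pbuf_ring() below: page-aligned ring_addr, power-of-two
ring_entries below 65536, and zeroed pad/resv fields.

#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/*
 * Sketch: register a small provided-buffer ring for buffer group 0.
 * ring_addr must be page aligned and ring_entries a power of two below
 * 65536, matching the checks in io_register_pbuf_ring() below.
 */
static int register_buf_ring(int ring_fd)
{
        struct io_uring_buf_reg reg;
        void *ring;

        /* one page is plenty for 8 io_uring_buf entries */
        ring = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (ring == MAP_FAILED)
                return -1;

        memset(&reg, 0, sizeof(reg));
        reg.ring_addr = (unsigned long) ring;
        reg.ring_entries = 8;
        reg.bgid = 0;

        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_PBUF_RING, &reg, 1);
}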
|
|
|
static int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
|
|
|
|
{
|
|
|
|
struct io_uring_buf_ring *br;
|
|
|
|
struct io_uring_buf_reg reg;
|
2022-07-21 14:01:15 +03:00
|
|
|
struct io_buffer_list *bl, *free_bl = NULL;
|
2022-04-30 23:38:53 +03:00
|
|
|
struct page **pages;
|
|
|
|
int nr_pages;
|
|
|
|
|
|
|
|
if (copy_from_user(®, arg, sizeof(reg)))
|
|
|
|
return -EFAULT;
|
|
|
|
|
|
|
|
if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
|
|
|
|
return -EINVAL;
|
|
|
|
if (!reg.ring_addr)
|
|
|
|
return -EFAULT;
|
|
|
|
if (reg.ring_addr & ~PAGE_MASK)
|
|
|
|
return -EINVAL;
|
|
|
|
if (!is_power_of_2(reg.ring_entries))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2022-06-13 13:11:57 +03:00
|
|
|
/* cannot disambiguate full vs empty due to head/tail size */
|
|
|
|
if (reg.ring_entries >= 65536)
|
|
|
|
return -EINVAL;
|
|
|
|
|
2022-04-30 23:38:53 +03:00
|
|
|
if (unlikely(reg.bgid < BGID_ARRAY && !ctx->io_bl)) {
|
|
|
|
int ret = io_init_bl_list(ctx);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
bl = io_buffer_get_list(ctx, reg.bgid);
|
2022-05-19 00:34:37 +03:00
|
|
|
if (bl) {
|
|
|
|
/* if mapped buffer ring OR classic exists, don't allow */
|
|
|
|
if (bl->buf_nr_pages || !list_empty(&bl->buf_list))
|
|
|
|
return -EEXIST;
|
|
|
|
} else {
|
2022-07-21 14:01:15 +03:00
|
|
|
free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
|
2022-04-30 23:38:53 +03:00
|
|
|
if (!bl)
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
|
|
|
pages = io_pin_pages(reg.ring_addr,
|
|
|
|
struct_size(br, bufs, reg.ring_entries),
|
|
|
|
&nr_pages);
|
|
|
|
if (IS_ERR(pages)) {
|
2022-07-21 14:01:15 +03:00
|
|
|
kfree(free_bl);
|
2022-04-30 23:38:53 +03:00
|
|
|
return PTR_ERR(pages);
|
|
|
|
}
|
|
|
|
|
|
|
|
br = page_address(pages[0]);
|
|
|
|
bl->buf_pages = pages;
|
|
|
|
bl->buf_nr_pages = nr_pages;
|
|
|
|
bl->nr_entries = reg.ring_entries;
|
|
|
|
bl->buf_ring = br;
|
|
|
|
bl->mask = reg.ring_entries - 1;
|
|
|
|
io_buffer_add_list(ctx, bl, reg.bgid);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
|
|
|
|
{
|
|
|
|
struct io_uring_buf_reg reg;
|
|
|
|
struct io_buffer_list *bl;
|
|
|
|
|
|
|
|
if (copy_from_user(®, arg, sizeof(reg)))
|
|
|
|
return -EFAULT;
|
|
|
|
if (reg.pad || reg.resv[0] || reg.resv[1] || reg.resv[2])
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
bl = io_buffer_get_list(ctx, reg.bgid);
|
|
|
|
if (!bl)
|
|
|
|
return -ENOENT;
|
|
|
|
if (!bl->buf_nr_pages)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
__io_remove_buffers(ctx, bl, -1U);
|
|
|
|
if (bl->bgid >= BGID_ARRAY) {
|
|
|
|
xa_erase(&ctx->io_bl_xa, bl->bgid);
|
|
|
|
kfree(bl);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
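Tearing a mapped buffer ring back down from userspace mirrors the kernel side above;
a minimal sketch (bgid is whatever group ID was used when registering):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/*
 * Sketch: drop the mapped buffer ring of one buffer group.  Only bgid is
 * consumed; pad and the resv fields must be zero.
 */
static int unregister_buf_ring(int ring_fd, unsigned short bgid)
{
        struct io_uring_buf_reg reg;

        memset(&reg, 0, sizeof(reg));
        reg.bgid = bgid;
        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_UNREGISTER_PBUF_RING, &reg, 1);
}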
|
|
|
|
|
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per-buffer size limit is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
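A minimal userspace sketch of the registration step described above (the two 64KB
page-aligned buffers are arbitrary choices for illustration):

#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <linux/io_uring.h>

/*
 * Sketch: pin two 64KB buffers for later use with IORING_OP_READ_FIXED
 * and IORING_OP_WRITE_FIXED.  The iovec array describes the buffers to
 * map; nr_args is the number of iovecs.
 */
static int register_fixed_buffers(int ring_fd, struct iovec iov[2])
{
        for (int i = 0; i < 2; i++) {
                if (posix_memalign(&iov[i].iov_base, 4096, 64 * 1024))
                        return -1;
                iov[i].iov_len = 64 * 1024;
        }
        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_BUFFERS, iov, 2);
}

Unregistering mirrors the description above: call io_uring_register() with
IORING_UNREGISTER_BUFFERS, a NULL argument and nr_args of 0.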
|
|
|
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
|
|
|
|
void __user *arg, unsigned nr_args)
|
2019-04-15 19:49:38 +03:00
|
|
|
__releases(ctx->uring_lock)
|
|
|
|
__acquires(ctx->uring_lock)
|
2019-01-09 19:16:05 +03:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2019-04-22 19:23:23 +03:00
|
|
|
/*
|
|
|
|
* We're inside the ring mutex, if the ref is already dying, then
|
|
|
|
* someone else killed the ctx or is already going through
|
|
|
|
* io_uring_register().
|
|
|
|
*/
|
|
|
|
if (percpu_ref_is_dying(&ctx->refs))
|
|
|
|
return -ENXIO;
|
|
|
|
|
2021-04-15 15:07:40 +03:00
|
|
|
if (ctx->restricted) {
|
|
|
|
if (opcode >= IORING_REGISTER_LAST)
|
|
|
|
return -EINVAL;
|
|
|
|
opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
|
|
|
|
if (!test_bit(opcode, ctx->restrictions.register_op))
|
|
|
|
return -EACCES;
|
|
|
|
}
|
|
|
|
|
2019-01-09 19:16:05 +03:00
|
|
|
switch (opcode) {
|
|
|
|
case IORING_REGISTER_BUFFERS:
|
2022-05-18 21:13:49 +03:00
|
|
|
ret = -EFAULT;
|
|
|
|
if (!arg)
|
|
|
|
break;
|
2021-04-25 16:32:26 +03:00
|
|
|
ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
|
2019-01-09 19:16:05 +03:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_BUFFERS:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
2021-01-06 23:39:10 +03:00
|
|
|
ret = io_sqe_buffers_unregister(ctx);
|
2019-01-09 19:16:05 +03:00
|
|
|
break;
|
2019-01-11 08:13:58 +03:00
|
|
|
case IORING_REGISTER_FILES:
|
2022-05-09 18:29:14 +03:00
|
|
|
ret = -EFAULT;
|
|
|
|
if (!arg)
|
|
|
|
break;
|
2021-04-25 16:32:21 +03:00
|
|
|
ret = io_sqe_files_register(ctx, arg, nr_args, NULL);
|
2019-01-11 08:13:58 +03:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_FILES:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_sqe_files_unregister(ctx);
|
|
|
|
break;
|
2019-10-03 22:59:56 +03:00
|
|
|
case IORING_REGISTER_FILES_UPDATE:
|
2021-04-25 16:32:22 +03:00
|
|
|
ret = io_register_files_update(ctx, arg, nr_args);
|
2019-10-03 22:59:56 +03:00
|
|
|
break;
|
2019-04-11 20:45:41 +03:00
|
|
|
case IORING_REGISTER_EVENTFD:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (nr_args != 1)
|
|
|
|
break;
|
2022-02-04 17:51:15 +03:00
|
|
|
ret = io_eventfd_register(ctx, arg, 0);
|
|
|
|
break;
|
|
|
|
case IORING_REGISTER_EVENTFD_ASYNC:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (nr_args != 1)
|
2020-01-08 21:04:00 +03:00
|
|
|
break;
|
2022-02-04 17:51:15 +03:00
|
|
|
ret = io_eventfd_register(ctx, arg, 1);
|
2019-04-11 20:45:41 +03:00
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_EVENTFD:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_eventfd_unregister(ctx);
|
|
|
|
break;
|
2020-01-17 01:36:52 +03:00
|
|
|
case IORING_REGISTER_PROBE:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || nr_args > 256)
|
|
|
|
break;
|
|
|
|
ret = io_probe(ctx, arg, nr_args);
|
|
|
|
break;
|
2020-01-28 20:04:42 +03:00
|
|
|
case IORING_REGISTER_PERSONALITY:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_personality(ctx);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_PERSONALITY:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg)
|
|
|
|
break;
|
|
|
|
ret = io_unregister_personality(ctx, nr_args);
|
|
|
|
break;
|
2020-08-27 17:58:31 +03:00
|
|
|
case IORING_REGISTER_ENABLE_RINGS:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_enable_rings(ctx);
|
|
|
|
break;
|
2020-08-27 17:58:30 +03:00
|
|
|
case IORING_REGISTER_RESTRICTIONS:
|
|
|
|
ret = io_register_restrictions(ctx, arg, nr_args);
|
|
|
|
break;
|
io_uring: change registration/upd/rsrc tagging ABI
There are aspects of the recently added rsrc registration/update and
tagging ABI that might become a nuisance in the future. First,
IORING_REGISTER_RSRC[_UPD] hides different types of resources under one
opcode, which breaks fine-grained control over them via restrictions.
It works for now, but once those are wanted under restrictions it would
require a rework.
It was also inconvenient trying to fit a new resource that doesn't
support all the features (e.g. dynamic update) into the interface, so
it's better to return to IORING_REGISTER_* top-level dispatching.
Second, register/update were designed to accept a resource type, but
that's not a good idea because there might be several ways of
registering a single resource type, e.g. we may want to add
non-contiguous buffers or something more exotic such as DMA-mapped
memory.
So, remove IORING_RSRC_[FILE,BUFFER] from the ABI, and keep them
internal for now to limit changes.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9b554897a7c17ad6e3becc48dfed2f7af9f423d5.1623339162.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-10 18:37:37 +03:00
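A hedged userspace sketch of the resulting top-level dispatch for files; the layout
of struct io_uring_rsrc_register and the convention that nr_args carries the struct
size are assumptions not spelled out in the text above:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/*
 * Sketch (assumed struct layout and nr_args convention): register a
 * fixed file table through the FILES2 opcode with per-file tags, where
 * a tag of 0 means "no tag".
 */
static int register_files2(int ring_fd, int *fds, __u64 *tags, unsigned int nr)
{
        struct io_uring_rsrc_register rr;

        memset(&rr, 0, sizeof(rr));
        rr.nr = nr;
        rr.data = (unsigned long) fds;
        rr.tags = (unsigned long) tags;
        return syscall(__NR_io_uring_register, ring_fd,
                       IORING_REGISTER_FILES2, &rr, sizeof(rr));
}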
|
|
|
case IORING_REGISTER_FILES2:
|
|
|
|
ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE);
|
|
|
|
break;
|
|
|
|
case IORING_REGISTER_FILES_UPDATE2:
|
|
|
|
ret = io_register_rsrc_update(ctx, arg, nr_args,
|
|
|
|
IORING_RSRC_FILE);
|
|
|
|
break;
|
|
|
|
case IORING_REGISTER_BUFFERS2:
|
|
|
|
ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER);
|
2021-04-25 16:32:21 +03:00
|
|
|
break;
|
2021-06-10 18:37:37 +03:00
|
|
|
case IORING_REGISTER_BUFFERS_UPDATE:
|
|
|
|
ret = io_register_rsrc_update(ctx, arg, nr_args,
|
|
|
|
IORING_RSRC_BUFFER);
|
2021-04-25 16:32:22 +03:00
|
|
|
break;
|
2021-06-17 19:19:54 +03:00
|
|
|
case IORING_REGISTER_IOWQ_AFF:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || !nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_register_iowq_aff(ctx, arg, nr_args);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_IOWQ_AFF:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (arg || nr_args)
|
|
|
|
break;
|
|
|
|
ret = io_unregister_iowq_aff(ctx);
|
|
|
|
break;
|
2021-08-27 20:33:19 +03:00
|
|
|
case IORING_REGISTER_IOWQ_MAX_WORKERS:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || nr_args != 2)
|
|
|
|
break;
|
|
|
|
ret = io_register_iowq_max_workers(ctx, arg);
|
|
|
|
break;
|
io_uring: add support for registering ring file descriptors
Lots of workloads use multiple threads, in which case the file table is
shared between them. This makes getting and putting the ring file
descriptor for each io_uring_enter(2) system call more expensive, as it
involves an atomic get and put for each call.
Similarly to how we allow registering normal file descriptors to avoid
this overhead, add support for an io_uring_register(2) API that allows
the ring fds themselves to be registered:
1) IORING_REGISTER_RING_FDS - takes an array of io_uring_rsrc_update
structs, and registers them with the task.
2) IORING_UNREGISTER_RING_FDS - takes an array of io_uring_rsrc_update
structs, and unregisters them.
When a ring fd is registered, it is internally represented by an offset.
This offset is returned to the application, and the application then
uses this offset and sets IORING_ENTER_REGISTERED_RING for the
io_uring_enter(2) system call. This works just like using a registered
file descriptor, rather than a real one, in an SQE, where
IOSQE_FIXED_FILE gets set to tell io_uring that we're using an internal
offset/descriptor rather than a real file descriptor.
In initial testing, this provides a nice bump in performance for
threaded applications in real world cases where the batch count (eg
number of requests submitted per io_uring_enter(2) invocation) is low.
In a microbenchmark, submitting NOP requests, we see the following
increases in performance:
Requests per syscall   Baseline    Registered   Increase
----------------------------------------------------------------
1                      ~7030K      ~8080K       +15%
2                      ~13120K     ~14800K      +13%
4                      ~22740K     ~25300K      +11%
Co-developed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-04 18:22:22 +03:00
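A minimal userspace sketch of the registration half (the field names of struct
io_uring_rsrc_update and the -1U "pick any slot" convention are assumptions beyond
what the text above states; the returned offset is then used with
IORING_ENTER_REGISTERED_RING):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/*
 * Sketch: register the ring fd itself and remember the returned offset.
 * The offset (rather than the real fd) is then passed to
 * io_uring_enter(2) together with IORING_ENTER_REGISTERED_RING.
 * Using offset = -1U to ask the kernel to pick a slot is an assumption.
 */
static int register_ring_fd(int ring_fd, unsigned int *registered_offset)
{
        struct io_uring_rsrc_update up;

        memset(&up, 0, sizeof(up));
        up.offset = -1U;        /* let the kernel choose the slot */
        up.data = ring_fd;
        if (syscall(__NR_io_uring_register, ring_fd,
                    IORING_REGISTER_RING_FDS, &up, 1) != 1)
                return -1;
        *registered_offset = up.offset;
        return 0;
}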
|
|
|
case IORING_REGISTER_RING_FDS:
|
|
|
|
ret = io_ringfd_register(ctx, arg, nr_args);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_RING_FDS:
|
|
|
|
ret = io_ringfd_unregister(ctx, arg, nr_args);
|
|
|
|
break;
|
2022-04-30 23:38:53 +03:00
|
|
|
case IORING_REGISTER_PBUF_RING:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || nr_args != 1)
|
|
|
|
break;
|
|
|
|
ret = io_register_pbuf_ring(ctx, arg);
|
|
|
|
break;
|
|
|
|
case IORING_UNREGISTER_PBUF_RING:
|
|
|
|
ret = -EINVAL;
|
|
|
|
if (!arg || nr_args != 1)
|
|
|
|
break;
|
|
|
|
ret = io_unregister_pbuf_ring(ctx, arg);
|
|
|
|
break;
|
2019-01-09 19:16:05 +03:00
|
|
|
default:
|
|
|
|
ret = -EINVAL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
|
|
|
|
void __user *, arg, unsigned int, nr_args)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
|
|
|
long ret = -EBADF;
|
|
|
|
struct fd f;
|
|
|
|
|
|
|
|
f = fdget(fd);
|
|
|
|
if (!f.file)
|
|
|
|
return -EBADF;
|
|
|
|
|
|
|
|
ret = -EOPNOTSUPP;
|
2022-05-25 19:28:04 +03:00
|
|
|
if (!io_is_uring_fops(f.file))
|
2019-01-09 19:16:05 +03:00
|
|
|
goto out_fput;
|
|
|
|
|
|
|
|
ctx = f.file->private_data;
|
|
|
|
|
2021-02-20 18:17:18 +03:00
|
|
|
io_run_task_work();
|
|
|
|
|
2019-01-09 19:16:05 +03:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ret = __io_uring_register(ctx, opcode, arg, nr_args);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2022-02-04 17:51:13 +03:00
|
|
|
trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
|
2019-01-09 19:16:05 +03:00
|
|
|
out_fput:
|
|
|
|
fdput(f);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2022-05-24 01:56:21 +03:00
|
|
|
static int io_no_issue(struct io_kiocb *req, unsigned int issue_flags)
|
|
|
|
{
|
|
|
|
WARN_ON_ONCE(1);
|
|
|
|
return -ECANCELED;
|
|
|
|
}
|
|
|
|
|
2022-05-26 05:31:09 +03:00
|
|
|
const struct io_op_def io_op_defs[] = {
|
2022-05-24 01:56:21 +03:00
|
|
|
[IORING_OP_NOP] = {
|
|
|
|
.audit_skip = 1,
|
|
|
|
.iopoll = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "NOP",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_nop_prep,
|
|
|
|
.issue = io_nop,
|
|
|
|
},
|
|
|
|
[IORING_OP_READV] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.buffer_select = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "READV",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_read,
|
2022-05-24 02:30:37 +03:00
|
|
|
.prep_async = io_readv_prep_async,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_readv_writev_cleanup,
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_WRITEV] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "WRITEV",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_write,
|
2022-05-24 02:30:37 +03:00
|
|
|
.prep_async = io_writev_prep_async,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_readv_writev_cleanup,
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_FSYNC] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "FSYNC",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_fsync_prep,
|
|
|
|
.issue = io_fsync,
|
|
|
|
},
|
|
|
|
[IORING_OP_READ_FIXED] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "READ_FIXED",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_read,
|
|
|
|
},
|
|
|
|
[IORING_OP_WRITE_FIXED] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "WRITE_FIXED",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_write,
|
|
|
|
},
|
|
|
|
[IORING_OP_POLL_ADD] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "POLL_ADD",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_poll_add_prep,
|
|
|
|
.issue = io_poll_add,
|
|
|
|
},
|
|
|
|
[IORING_OP_POLL_REMOVE] = {
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "POLL_REMOVE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_poll_remove_prep,
|
|
|
|
.issue = io_poll_remove,
|
|
|
|
},
|
|
|
|
[IORING_OP_SYNC_FILE_RANGE] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "SYNC_FILE_RANGE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_sfr_prep,
|
|
|
|
.issue = io_sync_file_range,
|
|
|
|
},
|
|
|
|
[IORING_OP_SENDMSG] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
|
|
|
.ioprio = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "SENDMSG",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.async_size = sizeof(struct io_async_msghdr),
|
|
|
|
.prep = io_sendmsg_prep,
|
|
|
|
.issue = io_sendmsg,
|
2022-05-24 02:30:37 +03:00
|
|
|
.prep_async = io_sendmsg_prep_async,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_sendmsg_recvmsg_cleanup,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
2022-05-24 19:26:28 +03:00
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_RECVMSG] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.buffer_select = 1,
|
|
|
|
.ioprio = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "RECVMSG",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.async_size = sizeof(struct io_async_msghdr),
|
|
|
|
.prep = io_recvmsg_prep,
|
|
|
|
.issue = io_recvmsg,
|
2022-05-24 02:30:37 +03:00
|
|
|
.prep_async = io_recvmsg_prep_async,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_sendmsg_recvmsg_cleanup,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
2022-05-24 19:26:28 +03:00
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_TIMEOUT] = {
|
|
|
|
.audit_skip = 1,
|
|
|
|
.async_size = sizeof(struct io_timeout_data),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "TIMEOUT",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_timeout_prep,
|
|
|
|
.issue = io_timeout,
|
|
|
|
},
|
|
|
|
[IORING_OP_TIMEOUT_REMOVE] = {
|
|
|
|
/* used by timeout updates' prep() */
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "TIMEOUT_REMOVE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_timeout_remove_prep,
|
|
|
|
.issue = io_timeout_remove,
|
|
|
|
},
|
|
|
|
[IORING_OP_ACCEPT] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.poll_exclusive = 1,
|
|
|
|
.ioprio = 1, /* used for flags */
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "ACCEPT",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_accept_prep,
|
|
|
|
.issue = io_accept,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_ASYNC_CANCEL] = {
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "ASYNC_CANCEL",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_async_cancel_prep,
|
|
|
|
.issue = io_async_cancel,
|
|
|
|
},
|
|
|
|
[IORING_OP_LINK_TIMEOUT] = {
|
|
|
|
.audit_skip = 1,
|
|
|
|
.async_size = sizeof(struct io_timeout_data),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "LINK_TIMEOUT",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_link_timeout_prep,
|
|
|
|
.issue = io_no_issue,
|
|
|
|
},
|
|
|
|
[IORING_OP_CONNECT] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "CONNECT",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.async_size = sizeof(struct io_async_connect),
|
|
|
|
.prep = io_connect_prep,
|
|
|
|
.issue = io_connect,
|
2022-05-24 02:30:37 +03:00
|
|
|
.prep_async = io_connect_prep_async,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_FALLOCATE] = {
|
|
|
|
.needs_file = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "FALLOCATE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_fallocate_prep,
|
|
|
|
.issue = io_fallocate,
|
|
|
|
},
|
|
|
|
[IORING_OP_OPENAT] = {
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "OPENAT",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_openat_prep,
|
|
|
|
.issue = io_openat,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_open_cleanup,
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_CLOSE] = {
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "CLOSE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_close_prep,
|
|
|
|
.issue = io_close,
|
|
|
|
},
|
|
|
|
[IORING_OP_FILES_UPDATE] = {
|
|
|
|
.audit_skip = 1,
|
|
|
|
.iopoll = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "FILES_UPDATE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_files_update_prep,
|
|
|
|
.issue = io_files_update,
|
|
|
|
},
|
|
|
|
[IORING_OP_STATX] = {
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "STATX",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_statx_prep,
|
|
|
|
.issue = io_statx,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_statx_cleanup,
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_READ] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.buffer_select = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "READ",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_read,
|
|
|
|
},
|
|
|
|
[IORING_OP_WRITE] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.hash_reg_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
|
|
|
.plug = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
|
|
|
.iopoll = 1,
|
|
|
|
.async_size = sizeof(struct io_async_rw),
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "WRITE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_prep_rw,
|
|
|
|
.issue = io_write,
|
|
|
|
},
|
|
|
|
[IORING_OP_FADVISE] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "FADVISE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_fadvise_prep,
|
|
|
|
.issue = io_fadvise,
|
|
|
|
},
|
|
|
|
[IORING_OP_MADVISE] = {
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "MADVISE",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_madvise_prep,
|
|
|
|
.issue = io_madvise,
|
|
|
|
},
|
|
|
|
[IORING_OP_SEND] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollout = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "SEND",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_sendmsg_prep,
|
|
|
|
.issue = io_send,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_RECV] = {
|
|
|
|
.needs_file = 1,
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.pollin = 1,
|
|
|
|
.buffer_select = 1,
|
|
|
|
.audit_skip = 1,
|
|
|
|
.ioprio = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "RECV",
|
2022-05-25 15:25:13 +03:00
|
|
|
#if defined(CONFIG_NET)
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_recvmsg_prep,
|
|
|
|
.issue = io_recv,
|
2022-05-25 15:25:13 +03:00
|
|
|
#else
|
|
|
|
.prep = io_eopnotsupp_prep,
|
|
|
|
#endif
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_OPENAT2] = {
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "OPENAT2",
|
2022-05-24 01:56:21 +03:00
|
|
|
.prep = io_openat2_prep,
|
|
|
|
.issue = io_openat2,
|
2022-05-24 19:26:28 +03:00
|
|
|
.cleanup = io_open_cleanup,
|
2022-05-24 01:56:21 +03:00
|
|
|
},
|
|
|
|
[IORING_OP_EPOLL_CTL] = {
|
|
|
|
.unbound_nonreg_file = 1,
|
|
|
|
.audit_skip = 1,
|
2022-05-25 20:57:03 +03:00
|
|
|
.name = "EPOLL",
|
#if defined(CONFIG_EPOLL)
		.prep = io_epoll_ctl_prep,
		.issue = io_epoll_ctl,
#else
		.prep = io_eopnotsupp_prep,
#endif
	},
	[IORING_OP_SPLICE] = {
		.needs_file = 1,
		.hash_reg_file = 1,
		.unbound_nonreg_file = 1,
		.audit_skip = 1,
		.name = "SPLICE",
		.prep = io_splice_prep,
		.issue = io_splice,
	},
	[IORING_OP_PROVIDE_BUFFERS] = {
		.audit_skip = 1,
		.iopoll = 1,
		.name = "PROVIDE_BUFFERS",
		.prep = io_provide_buffers_prep,
		.issue = io_provide_buffers,
	},
	[IORING_OP_REMOVE_BUFFERS] = {
		.audit_skip = 1,
		.iopoll = 1,
		.name = "REMOVE_BUFFERS",
		.prep = io_remove_buffers_prep,
		.issue = io_remove_buffers,
	},
	[IORING_OP_TEE] = {
		.needs_file = 1,
		.hash_reg_file = 1,
		.unbound_nonreg_file = 1,
		.audit_skip = 1,
		.name = "TEE",
		.prep = io_tee_prep,
		.issue = io_tee,
	},
	[IORING_OP_SHUTDOWN] = {
		.needs_file = 1,
		.name = "SHUTDOWN",
#if defined(CONFIG_NET)
		.prep = io_shutdown_prep,
		.issue = io_shutdown,
#else
		.prep = io_eopnotsupp_prep,
#endif
	},
	[IORING_OP_RENAMEAT] = {
		.name = "RENAMEAT",
		.prep = io_renameat_prep,
		.issue = io_renameat,
		.cleanup = io_renameat_cleanup,
	},
	[IORING_OP_UNLINKAT] = {
		.name = "UNLINKAT",
		.prep = io_unlinkat_prep,
		.issue = io_unlinkat,
		.cleanup = io_unlinkat_cleanup,
	},
	[IORING_OP_MKDIRAT] = {
		.name = "MKDIRAT",
		.prep = io_mkdirat_prep,
		.issue = io_mkdirat,
		.cleanup = io_mkdirat_cleanup,
	},
	[IORING_OP_SYMLINKAT] = {
		.name = "SYMLINKAT",
		.prep = io_symlinkat_prep,
		.issue = io_symlinkat,
		.cleanup = io_link_cleanup,
	},
	[IORING_OP_LINKAT] = {
		.name = "LINKAT",
		.prep = io_linkat_prep,
		.issue = io_linkat,
		.cleanup = io_link_cleanup,
	},
	[IORING_OP_MSG_RING] = {
		.needs_file = 1,
		.iopoll = 1,
		.name = "MSG_RING",
		.prep = io_msg_ring_prep,
		.issue = io_msg_ring,
	},
	[IORING_OP_FSETXATTR] = {
		.needs_file = 1,
		.name = "FSETXATTR",
		.prep = io_fsetxattr_prep,
		.issue = io_fsetxattr,
		.cleanup = io_xattr_cleanup,
	},
	[IORING_OP_SETXATTR] = {
		.name = "SETXATTR",
		.prep = io_setxattr_prep,
		.issue = io_setxattr,
		.cleanup = io_xattr_cleanup,
	},
	[IORING_OP_FGETXATTR] = {
		.needs_file = 1,
		.name = "FGETXATTR",
		.prep = io_fgetxattr_prep,
		.issue = io_fgetxattr,
		.cleanup = io_xattr_cleanup,
	},
	[IORING_OP_GETXATTR] = {
		.name = "GETXATTR",
		.prep = io_getxattr_prep,
		.issue = io_getxattr,
		.cleanup = io_xattr_cleanup,
	},
	[IORING_OP_SOCKET] = {
		.audit_skip = 1,
		.name = "SOCKET",
#if defined(CONFIG_NET)
		.prep = io_socket_prep,
		.issue = io_socket,
#else
		.prep = io_eopnotsupp_prep,
#endif
	},
	[IORING_OP_URING_CMD] = {
		.needs_file = 1,
		.plug = 1,
		.name = "URING_CMD",
		.async_size = uring_cmd_pdu_size(1),
		.prep = io_uring_cmd_prep,
		.issue = io_uring_cmd,
		.prep_async = io_uring_cmd_prep_async,
	},
};
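The table above is the per-opcode dispatch table: each entry is selected by its designated-initializer index, .prep validates the SQE for that opcode, .issue executes it, .cleanup (when present) releases per-request state, and ops whose support is compiled out point .prep at io_eopnotsupp_prep. The following is a self-contained userspace analogue of that pattern, for illustration only; struct op_def, dispatch(), eopnotsupp_prep() and the opcodes here are invented names, not kernel code.

#include <assert.h>
#include <errno.h>
#include <stdio.h>

enum { OP_NOP, OP_PRINT, OP_DISABLED, OP_LAST };

struct request {
	int opcode;
	const char *arg;
};

struct op_def {
	const char *name;
	int (*prep)(struct request *req);
	int (*issue)(struct request *req);
};

static int nop_prep(struct request *req)	{ (void)req; return 0; }
static int nop_issue(struct request *req)	{ (void)req; return 0; }
static int print_prep(struct request *req)	{ return req->arg ? 0 : -EINVAL; }
static int print_issue(struct request *req)	{ return puts(req->arg) >= 0 ? 0 : -EIO; }
static int eopnotsupp_prep(struct request *req)	{ (void)req; return -EOPNOTSUPP; }

static const struct op_def op_defs[] = {
	[OP_NOP] = {
		.name = "NOP",
		.prep = nop_prep,
		.issue = nop_issue,
	},
	[OP_PRINT] = {
		.name = "PRINT",
		.prep = print_prep,
		.issue = print_issue,
	},
	[OP_DISABLED] = {
		/* plays the role of an op whose support is compiled out */
		.name = "DISABLED",
		.prep = eopnotsupp_prep,
	},
};

/*
 * Mirrors the sanity loop in io_uring_init() below: every op needs a prep,
 * and every op not routed to the -EOPNOTSUPP prep also needs an issue.
 */
static void verify_op_defs(void)
{
	size_t i;

	for (i = 0; i < sizeof(op_defs) / sizeof(op_defs[0]); i++) {
		assert(op_defs[i].prep);
		if (op_defs[i].prep != eopnotsupp_prep)
			assert(op_defs[i].issue);
	}
}

static int dispatch(struct request *req)
{
	const struct op_def *def = &op_defs[req->opcode];
	int ret = def->prep(req);

	return ret ? ret : def->issue(req);
}

int main(void)
{
	struct request req = { .opcode = OP_PRINT, .arg = "hello" };

	verify_op_defs();
	return dispatch(&req) ? 1 : 0;
}

In the real table, config-dependent ops (CONFIG_EPOLL, CONFIG_NET) are wired to io_eopnotsupp_prep exactly as OP_DISABLED is wired here, which is why the init-time check only demands an .issue handler when .prep is something other than io_eopnotsupp_prep.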
static int __init io_uring_init(void)
{
	int i;

#define __BUILD_BUG_VERIFY_ELEMENT(stype, eoffset, etype, ename) do { \
	BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
	BUILD_BUG_ON(sizeof(etype) != sizeof_field(stype, ename)); \
} while (0)

#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
	__BUILD_BUG_VERIFY_ELEMENT(struct io_uring_sqe, eoffset, etype, ename)
	BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
	BUILD_BUG_SQE_ELEM(0, __u8, opcode);
	BUILD_BUG_SQE_ELEM(1, __u8, flags);
	BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
	BUILD_BUG_SQE_ELEM(4, __s32, fd);
	BUILD_BUG_SQE_ELEM(8, __u64, off);
	BUILD_BUG_SQE_ELEM(8, __u64, addr2);
	BUILD_BUG_SQE_ELEM(16, __u64, addr);
	BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
	BUILD_BUG_SQE_ELEM(24, __u32, len);
	BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
	BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
	BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
	BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
	BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
	BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
	BUILD_BUG_SQE_ELEM(32, __u64, user_data);
	BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
	BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
	BUILD_BUG_SQE_ELEM(42, __u16, personality);
	BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
	BUILD_BUG_SQE_ELEM(44, __u32, file_index);
	BUILD_BUG_SQE_ELEM(48, __u64, addr3);

	BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
		     sizeof(struct io_uring_rsrc_update));
	BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
		     sizeof(struct io_uring_rsrc_update2));

	/* ->buf_index is u16 */
	BUILD_BUG_ON(IORING_MAX_REG_BUFFERS >= (1u << 16));
	BUILD_BUG_ON(BGID_ARRAY * sizeof(struct io_buffer_list) > PAGE_SIZE);
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at a time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to set up a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The application adds
buffers and advances the shared ring tail; the head from which the
kernel consumes remains private to the kernel.
Provided buffers set up with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding entries to or removing
entries from the ring; they must use the mapped ring. Mapped provided
buffer rings can co-exist with normal provided buffers, just not within
the same group ID.
To gauge the overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how many
buffers are provided back at a time after they have been consumed:
Test                    Replenish       NOPs/sec
================================================================
No provided buffers     NA              ~30M
Provided buffers        32              ~16M
Provided buffers        1               ~10M
Ring buffers            32              ~27M
Ring buffers            1               ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care whether you provide 1 or more back
at the same time. This means applications can just replenish as they go,
rather than needing to batch and compact, further reducing overhead in
the application. The NOP benchmark above doesn't need to do any
compaction, so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 23:38:53 +03:00
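A rough userspace sketch of driving such a mapped buffer ring follows. It assumes liburing 2.2 or newer, which wraps the IORING_REGISTER_PBUF_RING registration in io_uring_register_buf_ring() and provides the io_uring_buf_ring_init/add/mask/advance() helpers; BGID, NR_BUFS, BUF_SIZE, setup_pbuf_ring() and prep_buf_select_read() are made-up names, and error handling is abbreviated.

#include <liburing.h>
#include <stdlib.h>

#define BGID		0	/* application-chosen buffer group ID */
#define NR_BUFS		128
#define BUF_SIZE	4096

/* Register a ring of NR_BUFS provided buffers for group BGID. */
static struct io_uring_buf_ring *setup_pbuf_ring(struct io_uring *ring,
						 char *bufs)
{
	struct io_uring_buf_reg reg = { };
	struct io_uring_buf_ring *br;
	int i;

	/* the application allocates the ring memory and registers it */
	if (posix_memalign((void **) &br, 4096,
			   NR_BUFS * sizeof(struct io_uring_buf)))
		return NULL;

	reg.ring_addr = (unsigned long) br;
	reg.ring_entries = NR_BUFS;
	reg.bgid = BGID;
	if (io_uring_register_buf_ring(ring, &reg, 0))
		return NULL;

	/* queue every buffer, then publish them with a single tail update */
	io_uring_buf_ring_init(br);
	for (i = 0; i < NR_BUFS; i++)
		io_uring_buf_ring_add(br, bufs + (size_t) i * BUF_SIZE,
				      BUF_SIZE, i,
				      io_uring_buf_ring_mask(NR_BUFS), i);
	io_uring_buf_ring_advance(br, NR_BUFS);
	return br;
}

/* A read that selects its buffer from group BGID instead of passing one in. */
static void prep_buf_select_read(struct io_uring *ring, int fd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	io_uring_prep_read(sqe, fd, NULL, BUF_SIZE, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = BGID;
}

int main(void)
{
	struct io_uring ring;
	char *bufs;

	if (io_uring_queue_init(8, &ring, 0))
		return 1;
	bufs = malloc((size_t) NR_BUFS * BUF_SIZE);
	if (!bufs || !setup_pbuf_ring(&ring, bufs))
		return 1;
	prep_buf_select_read(&ring, 0 /* stdin, just for illustration */);
	io_uring_submit(&ring);
	io_uring_queue_exit(&ring);
	return 0;
}

On completion, IORING_CQE_F_BUFFER in cqe->flags signals that a ring buffer was consumed, cqe->flags >> IORING_CQE_BUFFER_SHIFT gives its buffer ID, and the application recycles it by re-adding it to the ring and advancing the tail again.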
	BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 0);
	BUILD_BUG_ON(offsetof(struct io_uring_buf, resv) !=
		     offsetof(struct io_uring_buf_ring, tail));

	/* should fit into one byte */
	BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
	BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
	BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);

	BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
	BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof(int));

	BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));

	for (i = 0; i < ARRAY_SIZE(io_op_defs); i++) {
		BUG_ON(!io_op_defs[i].prep);
		if (io_op_defs[i].prep != io_eopnotsupp_prep)
			BUG_ON(!io_op_defs[i].issue);
		WARN_ON_ONCE(!io_op_defs[i].name);
	}

	req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC |
				SLAB_ACCOUNT);

	return 0;
};
__initcall(io_uring_init);
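For reference, the compile-time layout pinning that BUILD_BUG_SQE_ELEM performs inside io_uring_init() above can be reproduced in plain userspace C11 with static_assert and offsetof. The struct below is a hypothetical stand-in, not the real io_uring_sqe; it only illustrates the technique of asserting each field's offset and width so that any ABI-breaking change fails the build.

#include <assert.h>	/* static_assert (C11) */
#include <stddef.h>	/* offsetof */
#include <stdint.h>

/* Hypothetical fixed-layout structure standing in for a uAPI struct. */
struct sample_sqe {
	uint8_t  opcode;	/* byte 0 */
	uint8_t  flags;		/* byte 1 */
	uint16_t ioprio;	/* bytes 2-3 */
	int32_t  fd;		/* bytes 4-7 */
	uint64_t off;		/* bytes 8-15 */
};

/* Same shape as __BUILD_BUG_VERIFY_ELEMENT: check offset and width. */
#define VERIFY_ELEM(stype, eoffset, etype, ename)			\
	static_assert(offsetof(stype, ename) == (eoffset),		\
		      #ename ": unexpected offset");			\
	static_assert(sizeof(etype) == sizeof(((stype *)0)->ename),	\
		      #ename ": unexpected width")

VERIFY_ELEM(struct sample_sqe, 0, uint8_t,  opcode);
VERIFY_ELEM(struct sample_sqe, 1, uint8_t,  flags);
VERIFY_ELEM(struct sample_sqe, 2, uint16_t, ioprio);
VERIFY_ELEM(struct sample_sqe, 4, int32_t,  fd);
VERIFY_ELEM(struct sample_sqe, 8, uint64_t, off);
static_assert(sizeof(struct sample_sqe) == 16, "ABI size changed");

int main(void)
{
	return 0;	/* all checks happen at compile time */
}

The assertions assume the conventional natural-alignment padding of common ABIs; on a platform where that does not hold, the size check fails at compile time, which is exactly the behavior the kernel's BUILD_BUG_ON checks rely on.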