License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files, created by Philippe Ombredanne. Philippe
prepared the base worksheet and did an initial spot review of a few
1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0

#include <linux/ceph/ceph_debug.h>

#include <linux/crc32c.h>
#include <linux/ctype.h>
#include <linux/highmem.h>
#include <linux/inet.h>
#include <linux/kthread.h>
#include <linux/net.h>
#include <linux/nsproxy.h>
#include <linux/sched/mm.h>
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the following
script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there, i.e. gfp.h if only gfp is
  used, slab.h if slab is used.
* When the script inserts a new include, it looks at the include
  blocks and tries to place the new include so that its order conforms
  to its surroundings. It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree - or at the end if there
  doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
   some needed manual addition, and for others adding it to an
   implementation .h or embedding .c file was more appropriate. This
   step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from the build
tests in step 7, I'm fairly confident about the coverage of this
conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
#include <linux/slab.h>
#include <linux/socket.h>
#include <linux/string.h>
#ifdef	CONFIG_BLOCK
#include <linux/bio.h>
#endif	/* CONFIG_BLOCK */
#include <linux/dns_resolver.h>
#include <net/tcp.h>

#include <linux/ceph/ceph_features.h>
#include <linux/ceph/libceph.h>
#include <linux/ceph/messenger.h>
#include <linux/ceph/decode.h>
#include <linux/ceph/pagelist.h>
#include <linux/export.h>
/*
 * Ceph uses the messenger to exchange ceph_msg messages with other
 * hosts in the system.  The messenger provides ordered and reliable
 * delivery.  We tolerate TCP disconnects by reconnecting (with
 * exponential backoff) in the case of a fault (disconnection, bad
 * crc, protocol error).  Acks allow sent messages to be discarded by
 * the sender.
 */
/*
 * We track the state of the socket on a given connection using
 * values defined below.  The transition to a new socket state is
 * handled by a function which verifies we aren't coming from an
 * unexpected state.
 *
 *      --------
 *      | NEW* |  transient initial state
 *      --------
 *          | con_sock_state_init()
 *          v
 *      ----------
 *      | CLOSED |  initialized, but no socket (and no
 *      ----------  TCP connection)
 *       ^      \
 *       |       \ con_sock_state_connecting()
 *       |        ----------------------
 *       |                              \
 *       + con_sock_state_closed()       \
 *       |+---------------------------    \
 *       | \                          \    \
 *       |  -----------                \    \
 *       |  | CLOSING |  socket event;  \    \
 *       |  -----------  await close     \    \
 *       |       ^                        \   |
 *       |       |                        \   |
 *       |       + con_sock_state_closing() \ |
 *       |      / \                         | |
 *       |     /   ---------------          | |
 *       |    /                   \         v v
 *       |   /                    --------------
 *       |  /    -----------------| CONNECTING |  socket created, TCP
 *       |  |   /                 --------------  connect initiated
 *       |  |   | con_sock_state_connected()
 *       |  |   v
 *      -------------
 *      | CONNECTED |  TCP connection established
 *      -------------
 *
 * State values for ceph_connection->sock_state; NEW is assumed to be 0.
 */
#define CON_SOCK_STATE_NEW		0	/* -> CLOSED */
#define CON_SOCK_STATE_CLOSED		1	/* -> CONNECTING */
#define CON_SOCK_STATE_CONNECTING	2	/* -> CONNECTED or -> CLOSING */
#define CON_SOCK_STATE_CONNECTED	3	/* -> CLOSING or -> CLOSED */
#define CON_SOCK_STATE_CLOSING		4	/* -> CLOSED */
static bool con_flag_valid(unsigned long con_flag)
{
	switch (con_flag) {
	case CEPH_CON_F_LOSSYTX:
	case CEPH_CON_F_KEEPALIVE_PENDING:
	case CEPH_CON_F_WRITE_PENDING:
	case CEPH_CON_F_SOCK_CLOSED:
	case CEPH_CON_F_BACKOFF:
		return true;
	default:
		return false;
	}
}
void ceph_con_flag_clear(struct ceph_connection *con, unsigned long con_flag)
{
	BUG_ON(!con_flag_valid(con_flag));

	clear_bit(con_flag, &con->flags);
}

void ceph_con_flag_set(struct ceph_connection *con, unsigned long con_flag)
{
	BUG_ON(!con_flag_valid(con_flag));

	set_bit(con_flag, &con->flags);
}

bool ceph_con_flag_test(struct ceph_connection *con, unsigned long con_flag)
{
	BUG_ON(!con_flag_valid(con_flag));

	return test_bit(con_flag, &con->flags);
}

bool ceph_con_flag_test_and_clear(struct ceph_connection *con,
				  unsigned long con_flag)
{
	BUG_ON(!con_flag_valid(con_flag));

	return test_and_clear_bit(con_flag, &con->flags);
}

bool ceph_con_flag_test_and_set(struct ceph_connection *con,
				unsigned long con_flag)
{
	BUG_ON(!con_flag_valid(con_flag));

	return test_and_set_bit(con_flag, &con->flags);
}
/* Slab caches for frequently-allocated structures */

static struct kmem_cache *ceph_msg_cache;

#ifdef CONFIG_LOCKDEP
static struct lock_class_key socket_class;
#endif

static void queue_con(struct ceph_connection *con);
static void cancel_con(struct ceph_connection *con);
static void ceph_con_workfn(struct work_struct *);
static void con_fault(struct ceph_connection *con);
/*
 * Nicely render a sockaddr as a string.  An array of formatted
 * strings is used, to approximate reentrancy.
 */
#define ADDR_STR_COUNT_LOG	5	/* log2(# address strings in array) */
#define ADDR_STR_COUNT		(1 << ADDR_STR_COUNT_LOG)
#define ADDR_STR_COUNT_MASK	(ADDR_STR_COUNT - 1)
#define MAX_ADDR_STR_LEN	64	/* 54 is enough */

static char addr_str[ADDR_STR_COUNT][MAX_ADDR_STR_LEN];
static atomic_t addr_str_seq = ATOMIC_INIT(0);
struct page *ceph_zero_page;		/* used in certain error cases */

const char *ceph_pr_addr(const struct ceph_entity_addr *addr)
{
	int i;
	char *s;
	struct sockaddr_storage ss = addr->in_addr; /* align */
	struct sockaddr_in *in4 = (struct sockaddr_in *)&ss;
	struct sockaddr_in6 *in6 = (struct sockaddr_in6 *)&ss;

	i = atomic_inc_return(&addr_str_seq) & ADDR_STR_COUNT_MASK;
	s = addr_str[i];

	switch (ss.ss_family) {
	case AF_INET:
		snprintf(s, MAX_ADDR_STR_LEN, "(%d)%pI4:%hu",
			 le32_to_cpu(addr->type), &in4->sin_addr,
			 ntohs(in4->sin_port));
		break;

	case AF_INET6:
		snprintf(s, MAX_ADDR_STR_LEN, "(%d)[%pI6c]:%hu",
			 le32_to_cpu(addr->type), &in6->sin6_addr,
			 ntohs(in6->sin6_port));
		break;

	default:
		snprintf(s, MAX_ADDR_STR_LEN, "(unknown sockaddr family %hu)",
			 ss.ss_family);
	}

	return s;
}
EXPORT_SYMBOL(ceph_pr_addr);
void ceph_encode_my_addr(struct ceph_messenger *msgr)
{
	if (!ceph_msgr2(from_msgr(msgr))) {
		memcpy(&msgr->my_enc_addr, &msgr->inst.addr,
		       sizeof(msgr->my_enc_addr));
		ceph_encode_banner_addr(&msgr->my_enc_addr);
	}
}
/*
 * work queue for all reading and writing to/from the socket.
 */
static struct workqueue_struct *ceph_msgr_wq;
static int ceph_msgr_slab_init(void)
{
	BUG_ON(ceph_msg_cache);
	ceph_msg_cache = KMEM_CACHE(ceph_msg, 0);
	if (!ceph_msg_cache)
		return -ENOMEM;

	return 0;
}

static void ceph_msgr_slab_exit(void)
{
	BUG_ON(!ceph_msg_cache);
	kmem_cache_destroy(ceph_msg_cache);
	ceph_msg_cache = NULL;
}
static void _ceph_msgr_exit(void)
{
	if (ceph_msgr_wq) {
		destroy_workqueue(ceph_msgr_wq);
		ceph_msgr_wq = NULL;
	}

	BUG_ON(!ceph_zero_page);
	put_page(ceph_zero_page);
	ceph_zero_page = NULL;

	ceph_msgr_slab_exit();
}
int __init ceph_msgr_init(void)
{
	if (ceph_msgr_slab_init())
		return -ENOMEM;

	BUG_ON(ceph_zero_page);
	ceph_zero_page = ZERO_PAGE(0);
	get_page(ceph_zero_page);

	/*
	 * The number of active work items is limited by the number of
	 * connections, so leave @max_active at default.
	 */
	ceph_msgr_wq = alloc_workqueue("ceph-msgr", WQ_MEM_RECLAIM, 0);
	if (ceph_msgr_wq)
		return 0;

	pr_err("msgr_init failed to create workqueue\n");
	_ceph_msgr_exit();

	return -ENOMEM;
}
void ceph_msgr_exit(void)
{
	BUG_ON(ceph_msgr_wq == NULL);

	_ceph_msgr_exit();
}

void ceph_msgr_flush(void)
{
	flush_workqueue(ceph_msgr_wq);
}
EXPORT_SYMBOL(ceph_msgr_flush);
/* Connection socket state transition functions */

static void con_sock_state_init(struct ceph_connection *con)
{
	int old_state;

	old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
	if (WARN_ON(old_state != CON_SOCK_STATE_NEW))
		printk("%s: unexpected old state %d\n", __func__, old_state);
	dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
	     CON_SOCK_STATE_CLOSED);
}

static void con_sock_state_connecting(struct ceph_connection *con)
{
	int old_state;

	old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTING);
	if (WARN_ON(old_state != CON_SOCK_STATE_CLOSED))
		printk("%s: unexpected old state %d\n", __func__, old_state);
	dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
	     CON_SOCK_STATE_CONNECTING);
}

static void con_sock_state_connected(struct ceph_connection *con)
{
	int old_state;

	old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CONNECTED);
	if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTING))
		printk("%s: unexpected old state %d\n", __func__, old_state);
	dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
	     CON_SOCK_STATE_CONNECTED);
}

static void con_sock_state_closing(struct ceph_connection *con)
{
	int old_state;

	old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSING);
	if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTING &&
		    old_state != CON_SOCK_STATE_CONNECTED &&
		    old_state != CON_SOCK_STATE_CLOSING))
		printk("%s: unexpected old state %d\n", __func__, old_state);
	dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
	     CON_SOCK_STATE_CLOSING);
}

static void con_sock_state_closed(struct ceph_connection *con)
{
	int old_state;

	old_state = atomic_xchg(&con->sock_state, CON_SOCK_STATE_CLOSED);
	if (WARN_ON(old_state != CON_SOCK_STATE_CONNECTED &&
		    old_state != CON_SOCK_STATE_CLOSING &&
		    old_state != CON_SOCK_STATE_CONNECTING &&
		    old_state != CON_SOCK_STATE_CLOSED))
		printk("%s: unexpected old state %d\n", __func__, old_state);
	dout("%s con %p sock %d -> %d\n", __func__, con, old_state,
	     CON_SOCK_STATE_CLOSED);
}
/*
 * socket callback functions
 */

/* data available on socket, or listen socket received a connect */
static void ceph_sock_data_ready(struct sock *sk)
{
	struct ceph_connection *con = sk->sk_user_data;
	if (atomic_read(&con->msgr->stopping)) {
		return;
	}

	if (sk->sk_state != TCP_CLOSE_WAIT) {
		dout("%s %p state = %d, queueing work\n", __func__,
		     con, con->state);
		queue_con(con);
	}
}
/* socket has buffer space for writing */
static void ceph_sock_write_space(struct sock *sk)
{
	struct ceph_connection *con = sk->sk_user_data;

	/* only queue to workqueue if there is data we want to write,
	 * and there is sufficient space in the socket buffer to accept
	 * more data.  clear SOCK_NOSPACE so that ceph_sock_write_space()
	 * doesn't get called again until try_write() fills the socket
	 * buffer. See net/ipv4/tcp_input.c:tcp_check_space()
	 * and net/core/stream.c:sk_stream_write_space().
	 */
	if (ceph_con_flag_test(con, CEPH_CON_F_WRITE_PENDING)) {
		if (sk_stream_is_writeable(sk)) {
			dout("%s %p queueing write work\n", __func__, con);
			clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
			queue_con(con);
		}
	} else {
		dout("%s %p nothing to write\n", __func__, con);
	}
}
/* socket's state has changed */
static void ceph_sock_state_change(struct sock *sk)
{
	struct ceph_connection *con = sk->sk_user_data;

	dout("%s %p state = %d sk_state = %u\n", __func__,
	     con, con->state, sk->sk_state);

	switch (sk->sk_state) {
	case TCP_CLOSE:
		dout("%s TCP_CLOSE\n", __func__);
		fallthrough;
	case TCP_CLOSE_WAIT:
		dout("%s TCP_CLOSE_WAIT\n", __func__);
		con_sock_state_closing(con);
		ceph_con_flag_set(con, CEPH_CON_F_SOCK_CLOSED);
		queue_con(con);
		break;
	case TCP_ESTABLISHED:
		dout("%s TCP_ESTABLISHED\n", __func__);
		con_sock_state_connected(con);
		queue_con(con);
		break;
	default:	/* Everything else is uninteresting */
		break;
	}
}
/*
 * set up socket callbacks
 */
static void set_sock_callbacks(struct socket *sock,
			       struct ceph_connection *con)
{
	struct sock *sk = sock->sk;
	sk->sk_user_data = con;
	sk->sk_data_ready = ceph_sock_data_ready;
	sk->sk_write_space = ceph_sock_write_space;
	sk->sk_state_change = ceph_sock_state_change;
}
/*
 * socket helpers
 */

/*
 * initiate connection to a remote socket.
 */
int ceph_tcp_connect(struct ceph_connection *con)
{
	struct sockaddr_storage ss = con->peer_addr.in_addr; /* align */
	struct socket *sock;
	unsigned int noio_flag;
	int ret;

	dout("%s con %p peer_addr %s\n", __func__, con,
	     ceph_pr_addr(&con->peer_addr));
	BUG_ON(con->sock);

	/* sock_create_kern() allocates with GFP_KERNEL */
	noio_flag = memalloc_noio_save();
	ret = sock_create_kern(read_pnet(&con->msgr->net), ss.ss_family,
			       SOCK_STREAM, IPPROTO_TCP, &sock);
	memalloc_noio_restore(noio_flag);
	if (ret)
		return ret;
	sock->sk->sk_allocation = GFP_NOFS;

#ifdef CONFIG_LOCKDEP
	lockdep_set_class(&sock->sk->sk_lock, &socket_class);
#endif

	set_sock_callbacks(sock, con);

	con_sock_state_connecting(con);
	ret = sock->ops->connect(sock, (struct sockaddr *)&ss, sizeof(ss),
				 O_NONBLOCK);
	if (ret == -EINPROGRESS) {
		dout("connect %s EINPROGRESS sk_state = %u\n",
		     ceph_pr_addr(&con->peer_addr),
		     sock->sk->sk_state);
	} else if (ret < 0) {
		pr_err("connect %s error %d\n",
		       ceph_pr_addr(&con->peer_addr), ret);
		sock_release(sock);
		return ret;
	}

	if (ceph_test_opt(from_msgr(con->msgr), TCP_NODELAY))
		tcp_sock_set_nodelay(sock->sk);

	con->sock = sock;
	return 0;
}
/*
|
|
|
|
* Shutdown/close the socket for the given connection.
|
|
|
|
*/
|
2020-11-09 18:29:47 +03:00
|
|
|
int ceph_con_close_socket(struct ceph_connection *con)
|
2009-10-06 22:31:13 +04:00
|
|
|
{
|
2012-07-31 05:16:16 +04:00
|
|
|
int rc = 0;
|
2009-10-06 22:31:13 +04:00
|
|
|
|
2020-11-09 18:29:47 +03:00
|
|
|
dout("%s con %p sock %p\n", __func__, con, con->sock);
|
2012-07-31 05:16:16 +04:00
|
|
|
if (con->sock) {
|
|
|
|
rc = con->sock->ops->shutdown(con->sock, SHUT_RDWR);
|
|
|
|
sock_release(con->sock);
|
|
|
|
con->sock = NULL;
|
|
|
|
}
|
2012-06-21 06:53:53 +04:00
|
|
|
|
|
|
|
/*
|
2012-07-21 04:29:55 +04:00
|
|
|
* Forcibly clear the SOCK_CLOSED flag. It gets set
|
2012-06-21 06:53:53 +04:00
|
|
|
* independent of the connection mutex, and we could have
|
|
|
|
* received a socket close event before we had the chance to
|
|
|
|
* shut the socket down.
|
|
|
|
*/
|
2020-11-09 18:29:47 +03:00
|
|
|
ceph_con_flag_clear(con, CEPH_CON_F_SOCK_CLOSED);
|
2012-07-31 05:16:16 +04:00
|
|
|
|
2012-05-23 07:15:49 +04:00
|
|
|
con_sock_state_closed(con);
|
2009-10-06 22:31:13 +04:00
|
|
|
return rc;
|
|
|
|
}

static void ceph_con_reset_protocol(struct ceph_connection *con)
{
	dout("%s con %p\n", __func__, con);

	ceph_con_close_socket(con);
	if (con->in_msg) {
		WARN_ON(con->in_msg->con != con);
		ceph_msg_put(con->in_msg);
		con->in_msg = NULL;
	}
	if (con->out_msg) {
		WARN_ON(con->out_msg->con != con);
		ceph_msg_put(con->out_msg);
		con->out_msg = NULL;
	}
	if (con->bounce_page) {
		__free_page(con->bounce_page);
		con->bounce_page = NULL;
	}

	if (ceph_msgr2(from_msgr(con->msgr)))
		ceph_con_v2_reset_protocol(con);
	else
		ceph_con_v1_reset_protocol(con);
}

/*
 * Reset a connection.  Discard all incoming and outgoing messages
 * and clear *_seq state.
 */
static void ceph_msg_remove(struct ceph_msg *msg)
{
	list_del_init(&msg->list_head);

	ceph_msg_put(msg);
}

static void ceph_msg_remove_list(struct list_head *head)
{
	while (!list_empty(head)) {
		struct ceph_msg *msg = list_first_entry(head, struct ceph_msg,
							list_head);
		ceph_msg_remove(msg);
	}
}

void ceph_con_reset_session(struct ceph_connection *con)
{
	dout("%s con %p\n", __func__, con);

	WARN_ON(con->in_msg);
	WARN_ON(con->out_msg);
	ceph_msg_remove_list(&con->out_queue);
	ceph_msg_remove_list(&con->out_sent);
	con->out_seq = 0;
	con->in_seq = 0;
	con->in_seq_acked = 0;

	if (ceph_msgr2(from_msgr(con->msgr)))
		ceph_con_v2_reset_session(con);
	else
		ceph_con_v1_reset_session(con);
}

/*
 * mark a peer down.  drop any open connections.
 */
void ceph_con_close(struct ceph_connection *con)
{
	mutex_lock(&con->mutex);
	dout("con_close %p peer %s\n", con, ceph_pr_addr(&con->peer_addr));
	con->state = CEPH_CON_S_CLOSED;

	ceph_con_flag_clear(con, CEPH_CON_F_LOSSYTX);	/* so we retry next
							   connect */
	ceph_con_flag_clear(con, CEPH_CON_F_KEEPALIVE_PENDING);
	ceph_con_flag_clear(con, CEPH_CON_F_WRITE_PENDING);
	ceph_con_flag_clear(con, CEPH_CON_F_BACKOFF);

	ceph_con_reset_protocol(con);
	ceph_con_reset_session(con);
	cancel_con(con);
	mutex_unlock(&con->mutex);
}
EXPORT_SYMBOL(ceph_con_close);

/*
 * Reopen a closed connection, with a new peer address.
 */
void ceph_con_open(struct ceph_connection *con,
		   __u8 entity_type, __u64 entity_num,
		   struct ceph_entity_addr *addr)
{
	mutex_lock(&con->mutex);
	dout("con_open %p %s\n", con, ceph_pr_addr(addr));

	WARN_ON(con->state != CEPH_CON_S_CLOSED);
	con->state = CEPH_CON_S_PREOPEN;

	con->peer_name.type = (__u8) entity_type;
	con->peer_name.num = cpu_to_le64(entity_num);

	memcpy(&con->peer_addr, addr, sizeof(*addr));
	con->delay = 0;      /* reset backoff memory */
	mutex_unlock(&con->mutex);
	queue_con(con);
}
EXPORT_SYMBOL(ceph_con_open);

/*
 * ceph: avoid reopening osd connections when address hasn't changed
 *
 * We get a fault callback on _every_ tcp connection fault.  Normally, we
 * want to reopen the connection when that happens.  If the address we
 * have is bad, however, and connection attempts always result in a
 * connection refused or similar error, explicitly closing and reopening
 * the msgr connection just prevents the messenger's backoff logic from
 * kicking in.  The result can be a console full of
 *
 *   [ 3974.417106] ceph: osd11 10.3.14.138:6800 connection failed
 *   [ 3974.423295] ceph: osd11 10.3.14.138:6800 connection failed
 *   [ 3974.429709] ceph: osd11 10.3.14.138:6800 connection failed
 *
 * Instead, if we get a fault, and have outstanding requests, but the osd
 * address hasn't changed and the connection never successfully connected
 * in the first place, do nothing to the osd connection.  The messenger
 * layer will back off and retry periodically, because we never connected
 * and thus the lossy bit is not set.  Instead, touch each request's
 * r_stamp so that handle_timeout can tell the request is still alive
 * and kicking.
 *
 * Signed-off-by: Sage Weil <sage@newdream.net>
 */

/*
 * return true if this connection ever successfully opened
 */
bool ceph_con_opened(struct ceph_connection *con)
{
	if (ceph_msgr2(from_msgr(con->msgr)))
		return ceph_con_v2_opened(con);

	return ceph_con_v1_opened(con);
}

/*
 * initialize a new connection.
 */
void ceph_con_init(struct ceph_connection *con, void *private,
		   const struct ceph_connection_operations *ops,
		   struct ceph_messenger *msgr)
{
	dout("con_init %p\n", con);
	memset(con, 0, sizeof(*con));
	con->private = private;
	con->ops = ops;
	con->msgr = msgr;

	con_sock_state_init(con);

	mutex_init(&con->mutex);
	INIT_LIST_HEAD(&con->out_queue);
	INIT_LIST_HEAD(&con->out_sent);
	INIT_DELAYED_WORK(&con->work, ceph_con_workfn);

	con->state = CEPH_CON_S_CLOSED;
}
EXPORT_SYMBOL(ceph_con_init);

/*
 * We maintain a global counter to order connection attempts.  Get
 * a unique seq greater than @gt.
 */
u32 ceph_get_global_seq(struct ceph_messenger *msgr, u32 gt)
{
	u32 ret;

	spin_lock(&msgr->global_seq_lock);
	if (msgr->global_seq < gt)
		msgr->global_seq = gt;
	ret = ++msgr->global_seq;
	spin_unlock(&msgr->global_seq_lock);
	return ret;
}
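The ordering rule above is small enough to check in isolation. This is a hypothetical userspace sketch (names like `get_global_seq` are stand-ins, and the spinlock is omitted since there is no concurrency here): the counter is first bumped up to `gt` if needed, then pre-incremented, so the returned seq is always unique and strictly greater than both `gt` and every previously returned value.

```c
#include <assert.h>

/* Stand-in for ceph_messenger::global_seq (locking omitted). */
static unsigned int global_seq;

/* Return a seq that is unique and strictly greater than @gt,
 * mirroring the body of ceph_get_global_seq(). */
static unsigned int get_global_seq(unsigned int gt)
{
	if (global_seq < gt)
		global_seq = gt;	/* jump past gt if we are behind */
	return ++global_seq;		/* pre-increment: strictly greater */
}
```

Because the counter only ever moves forward, a caller passing a stale (smaller) `gt` still gets a fresh, larger seq.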

/*
 * Discard messages that have been acked by the server.
 */
void ceph_con_discard_sent(struct ceph_connection *con, u64 ack_seq)
{
	struct ceph_msg *msg;
	u64 seq;

	dout("%s con %p ack_seq %llu\n", __func__, con, ack_seq);
	while (!list_empty(&con->out_sent)) {
		msg = list_first_entry(&con->out_sent, struct ceph_msg,
				       list_head);
		WARN_ON(msg->needs_out_seq);
		seq = le64_to_cpu(msg->hdr.seq);
		if (seq > ack_seq)
			break;

		dout("%s con %p discarding msg %p seq %llu\n", __func__, con,
		     msg, seq);
		ceph_msg_remove(msg);
	}
}
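The discard loop relies on `out_sent` holding messages in ascending seq order, so it can stop at the first seq greater than `ack_seq`. A hedged userspace sketch of the same invariant, using a plain array instead of the kernel linked list (`struct sent_queue` and `discard_sent` are hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for con->out_sent: seqs in ascending order. */
struct sent_queue {
	unsigned long long seq[8];
	size_t len;
};

/* Drop every message whose seq is <= ack_seq, keeping the rest,
 * as ceph_con_discard_sent() does for the real list. */
static void discard_sent(struct sent_queue *q, unsigned long long ack_seq)
{
	size_t drop = 0;

	/* acked prefix: everything up to and including ack_seq */
	while (drop < q->len && q->seq[drop] <= ack_seq)
		drop++;
	/* shift the unacked suffix to the front */
	for (size_t i = drop; i < q->len; i++)
		q->seq[i - drop] = q->seq[i];
	q->len -= drop;
}
```

As in the kernel version, an ack covering everything empties the queue, and an ack for an already-discarded seq is a no-op.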

/*
 * Discard messages that have been requeued in con_fault(), up to
 * reconnect_seq.  This avoids gratuitously resending messages that
 * the server had received and handled prior to reconnect.
 */
void ceph_con_discard_requeued(struct ceph_connection *con, u64 reconnect_seq)
{
	struct ceph_msg *msg;
	u64 seq;

	dout("%s con %p reconnect_seq %llu\n", __func__, con, reconnect_seq);
	while (!list_empty(&con->out_queue)) {
		msg = list_first_entry(&con->out_queue, struct ceph_msg,
				       list_head);
		if (msg->needs_out_seq)
			break;
		seq = le64_to_cpu(msg->hdr.seq);
		if (seq > reconnect_seq)
			break;

		dout("%s con %p discarding msg %p seq %llu\n", __func__, con,
		     msg, seq);
		ceph_msg_remove(msg);
	}
}

#ifdef CONFIG_BLOCK

/*
 * For a bio data item, a piece is whatever remains of the next
 * entry in the current bio iovec, or the first entry in the next
 * bio in the list.
 */
static void ceph_msg_data_bio_cursor_init(struct ceph_msg_data_cursor *cursor,
					  size_t length)
{
	struct ceph_msg_data *data = cursor->data;
	struct ceph_bio_iter *it = &cursor->bio_iter;

	cursor->resid = min_t(size_t, length, data->bio_length);
	*it = data->bio_pos;
	if (cursor->resid < it->iter.bi_size)
		it->iter.bi_size = cursor->resid;

	BUG_ON(cursor->resid < bio_iter_len(it->bio, it->iter));
	cursor->last_piece = cursor->resid == bio_iter_len(it->bio, it->iter);
}

static struct page *ceph_msg_data_bio_next(struct ceph_msg_data_cursor *cursor,
					   size_t *page_offset,
					   size_t *length)
{
	struct bio_vec bv = bio_iter_iovec(cursor->bio_iter.bio,
					   cursor->bio_iter.iter);

	*page_offset = bv.bv_offset;
	*length = bv.bv_len;
	return bv.bv_page;
}

static bool ceph_msg_data_bio_advance(struct ceph_msg_data_cursor *cursor,
				      size_t bytes)
{
	struct ceph_bio_iter *it = &cursor->bio_iter;
	struct page *page = bio_iter_page(it->bio, it->iter);

	BUG_ON(bytes > cursor->resid);
	BUG_ON(bytes > bio_iter_len(it->bio, it->iter));
	cursor->resid -= bytes;
	bio_advance_iter(it->bio, &it->iter, bytes);

	if (!cursor->resid) {
		BUG_ON(!cursor->last_piece);
		return false;   /* no more data */
	}

	if (!bytes || (it->iter.bi_size && it->iter.bi_bvec_done &&
		       page == bio_iter_page(it->bio, it->iter)))
		return false;	/* more bytes to process in this segment */

	if (!it->iter.bi_size) {
		it->bio = it->bio->bi_next;
		it->iter = it->bio->bi_iter;
		if (cursor->resid < it->iter.bi_size)
			it->iter.bi_size = cursor->resid;
	}

	BUG_ON(cursor->last_piece);
	BUG_ON(cursor->resid < bio_iter_len(it->bio, it->iter));
	cursor->last_piece = cursor->resid == bio_iter_len(it->bio, it->iter);
	return true;
}
#endif /* CONFIG_BLOCK */

static void ceph_msg_data_bvecs_cursor_init(struct ceph_msg_data_cursor *cursor,
					    size_t length)
{
	struct ceph_msg_data *data = cursor->data;
	struct bio_vec *bvecs = data->bvec_pos.bvecs;

	cursor->resid = min_t(size_t, length, data->bvec_pos.iter.bi_size);
	cursor->bvec_iter = data->bvec_pos.iter;
	cursor->bvec_iter.bi_size = cursor->resid;

	BUG_ON(cursor->resid < bvec_iter_len(bvecs, cursor->bvec_iter));
	cursor->last_piece =
	    cursor->resid == bvec_iter_len(bvecs, cursor->bvec_iter);
}

static struct page *ceph_msg_data_bvecs_next(struct ceph_msg_data_cursor *cursor,
					     size_t *page_offset,
					     size_t *length)
{
	struct bio_vec bv = bvec_iter_bvec(cursor->data->bvec_pos.bvecs,
					   cursor->bvec_iter);

	*page_offset = bv.bv_offset;
	*length = bv.bv_len;
	return bv.bv_page;
}

static bool ceph_msg_data_bvecs_advance(struct ceph_msg_data_cursor *cursor,
					size_t bytes)
{
	struct bio_vec *bvecs = cursor->data->bvec_pos.bvecs;
	struct page *page = bvec_iter_page(bvecs, cursor->bvec_iter);

	BUG_ON(bytes > cursor->resid);
	BUG_ON(bytes > bvec_iter_len(bvecs, cursor->bvec_iter));
	cursor->resid -= bytes;
	bvec_iter_advance(bvecs, &cursor->bvec_iter, bytes);

	if (!cursor->resid) {
		BUG_ON(!cursor->last_piece);
		return false;   /* no more data */
	}

	if (!bytes || (cursor->bvec_iter.bi_bvec_done &&
		       page == bvec_iter_page(bvecs, cursor->bvec_iter)))
		return false;	/* more bytes to process in this segment */

	BUG_ON(cursor->last_piece);
	BUG_ON(cursor->resid < bvec_iter_len(bvecs, cursor->bvec_iter));
	cursor->last_piece =
	    cursor->resid == bvec_iter_len(bvecs, cursor->bvec_iter);
	return true;
}

/*
 * For a page array, a piece comes from the first page in the array
 * that has not already been fully consumed.
 */
static void ceph_msg_data_pages_cursor_init(struct ceph_msg_data_cursor *cursor,
					    size_t length)
{
	struct ceph_msg_data *data = cursor->data;
	int page_count;

	BUG_ON(data->type != CEPH_MSG_DATA_PAGES);

	BUG_ON(!data->pages);
	BUG_ON(!data->length);

	cursor->resid = min(length, data->length);
	page_count = calc_pages_for(data->alignment, (u64)data->length);
	cursor->page_offset = data->alignment & ~PAGE_MASK;
	cursor->page_index = 0;
	BUG_ON(page_count > (int)USHRT_MAX);
	cursor->page_count = (unsigned short)page_count;
	BUG_ON(length > SIZE_MAX - cursor->page_offset);
	cursor->last_piece = cursor->page_offset + cursor->resid <= PAGE_SIZE;
}

static struct page *
ceph_msg_data_pages_next(struct ceph_msg_data_cursor *cursor,
			 size_t *page_offset, size_t *length)
{
	struct ceph_msg_data *data = cursor->data;

	BUG_ON(data->type != CEPH_MSG_DATA_PAGES);

	BUG_ON(cursor->page_index >= cursor->page_count);
	BUG_ON(cursor->page_offset >= PAGE_SIZE);

	*page_offset = cursor->page_offset;
	if (cursor->last_piece)
		*length = cursor->resid;
	else
		*length = PAGE_SIZE - *page_offset;

	return data->pages[cursor->page_index];
}

static bool ceph_msg_data_pages_advance(struct ceph_msg_data_cursor *cursor,
					size_t bytes)
{
	BUG_ON(cursor->data->type != CEPH_MSG_DATA_PAGES);

	BUG_ON(cursor->page_offset + bytes > PAGE_SIZE);

	/* Advance the cursor page offset */

	cursor->resid -= bytes;
	cursor->page_offset = (cursor->page_offset + bytes) & ~PAGE_MASK;
	if (!bytes || cursor->page_offset)
		return false;	/* more bytes to process in the current page */

	if (!cursor->resid)
		return false;   /* no more data */

	/* Move on to the next page; offset is already at 0 */

	BUG_ON(cursor->page_index >= cursor->page_count);
	cursor->page_index++;
	cursor->last_piece = cursor->resid <= PAGE_SIZE;

	return true;
}
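The page-offset update above, `(page_offset + bytes) & ~PAGE_MASK`, is just "advance modulo the page size": since kernel `PAGE_MASK` is `~(PAGE_SIZE - 1)`, masking with `~PAGE_MASK` keeps only the within-page bits, wrapping to 0 exactly at a page boundary. A hedged userspace sketch with a hard-coded 4 KiB page (the `MY_*` names are stand-ins, not kernel macros):

```c
#include <assert.h>
#include <stddef.h>

#define MY_PAGE_SIZE 4096UL			/* stand-in for PAGE_SIZE */
#define MY_PAGE_MASK (~(MY_PAGE_SIZE - 1))	/* stand-in for PAGE_MASK */

/* Advance an offset within a page, wrapping to 0 at a page boundary,
 * mirroring cursor->page_offset = (page_offset + bytes) & ~PAGE_MASK. */
static size_t advance_page_offset(size_t page_offset, size_t bytes)
{
	return (page_offset + bytes) & ~MY_PAGE_MASK;
}
```

A result of 0 is what tells `ceph_msg_data_pages_advance()` that the current page is fully consumed and `page_index` should move on.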

/*
 * For a pagelist, a piece is whatever remains to be consumed in the
 * first page in the list, or the front of the next page.
 */
static void
ceph_msg_data_pagelist_cursor_init(struct ceph_msg_data_cursor *cursor,
				   size_t length)
{
	struct ceph_msg_data *data = cursor->data;
	struct ceph_pagelist *pagelist;
	struct page *page;

	BUG_ON(data->type != CEPH_MSG_DATA_PAGELIST);

	pagelist = data->pagelist;
	BUG_ON(!pagelist);

	if (!length)
		return;		/* pagelist can be assigned but empty */

	BUG_ON(list_empty(&pagelist->head));
	page = list_first_entry(&pagelist->head, struct page, lru);

	cursor->resid = min(length, pagelist->length);
	cursor->page = page;
	cursor->offset = 0;
	cursor->last_piece = cursor->resid <= PAGE_SIZE;
}

static struct page *
ceph_msg_data_pagelist_next(struct ceph_msg_data_cursor *cursor,
			    size_t *page_offset, size_t *length)
{
	struct ceph_msg_data *data = cursor->data;
	struct ceph_pagelist *pagelist;

	BUG_ON(data->type != CEPH_MSG_DATA_PAGELIST);

	pagelist = data->pagelist;
	BUG_ON(!pagelist);

	BUG_ON(!cursor->page);
	BUG_ON(cursor->offset + cursor->resid != pagelist->length);

	/* offset of first page in pagelist is always 0 */
	*page_offset = cursor->offset & ~PAGE_MASK;
	if (cursor->last_piece)
		*length = cursor->resid;
	else
		*length = PAGE_SIZE - *page_offset;

	return cursor->page;
}

static bool ceph_msg_data_pagelist_advance(struct ceph_msg_data_cursor *cursor,
					   size_t bytes)
{
	struct ceph_msg_data *data = cursor->data;
	struct ceph_pagelist *pagelist;

	BUG_ON(data->type != CEPH_MSG_DATA_PAGELIST);

	pagelist = data->pagelist;
	BUG_ON(!pagelist);

	BUG_ON(cursor->offset + cursor->resid != pagelist->length);
	BUG_ON((cursor->offset & ~PAGE_MASK) + bytes > PAGE_SIZE);

	/* Advance the cursor offset */

	cursor->resid -= bytes;
	cursor->offset += bytes;
	/* offset of first page in pagelist is always 0 */
	if (!bytes || cursor->offset & ~PAGE_MASK)
		return false;	/* more bytes to process in the current page */

	if (!cursor->resid)
		return false;   /* no more data */

	/* Move on to the next page */

	BUG_ON(list_is_last(&cursor->page->lru, &pagelist->head));
	cursor->page = list_next_entry(cursor->page, lru);
	cursor->last_piece = cursor->resid <= PAGE_SIZE;

	return true;
}

/*
 * Message data is handled (sent or received) in pieces, where each
 * piece resides on a single page.  The network layer might not
 * consume an entire piece at once.  A data item's cursor keeps
 * track of which piece is next to process and how much remains to
 * be processed in that piece.  It also tracks whether the current
 * piece is the last one in the data item.
 */
static void __ceph_msg_data_cursor_init(struct ceph_msg_data_cursor *cursor)
{
	size_t length = cursor->total_resid;

	switch (cursor->data->type) {
	case CEPH_MSG_DATA_PAGELIST:
		ceph_msg_data_pagelist_cursor_init(cursor, length);
		break;
	case CEPH_MSG_DATA_PAGES:
		ceph_msg_data_pages_cursor_init(cursor, length);
		break;
#ifdef CONFIG_BLOCK
	case CEPH_MSG_DATA_BIO:
		ceph_msg_data_bio_cursor_init(cursor, length);
		break;
#endif /* CONFIG_BLOCK */
	case CEPH_MSG_DATA_BVECS:
		ceph_msg_data_bvecs_cursor_init(cursor, length);
		break;
	case CEPH_MSG_DATA_NONE:
	default:
		/* BUG(); */
		break;
	}
	cursor->need_crc = true;
}

void ceph_msg_data_cursor_init(struct ceph_msg_data_cursor *cursor,
			       struct ceph_msg *msg, size_t length)
{
	BUG_ON(!length);
	BUG_ON(length > msg->data_length);
	BUG_ON(!msg->num_data_items);

	cursor->total_resid = length;
	cursor->data = msg->data;

	__ceph_msg_data_cursor_init(cursor);
}

/*
 * Return the page containing the next piece to process for a given
 * data item, and supply the page offset and length of that piece.
 * Indicate whether this is the last piece in this data item.
 */
struct page *ceph_msg_data_next(struct ceph_msg_data_cursor *cursor,
				size_t *page_offset, size_t *length,
				bool *last_piece)
{
	struct page *page;

	switch (cursor->data->type) {
	case CEPH_MSG_DATA_PAGELIST:
		page = ceph_msg_data_pagelist_next(cursor, page_offset, length);
		break;
	case CEPH_MSG_DATA_PAGES:
		page = ceph_msg_data_pages_next(cursor, page_offset, length);
		break;
#ifdef CONFIG_BLOCK
	case CEPH_MSG_DATA_BIO:
		page = ceph_msg_data_bio_next(cursor, page_offset, length);
		break;
#endif /* CONFIG_BLOCK */
	case CEPH_MSG_DATA_BVECS:
		page = ceph_msg_data_bvecs_next(cursor, page_offset, length);
		break;
	case CEPH_MSG_DATA_NONE:
	default:
		page = NULL;
		break;
	}

	BUG_ON(!page);
	BUG_ON(*page_offset + *length > PAGE_SIZE);
	BUG_ON(!*length);
	BUG_ON(*length > cursor->resid);
	if (last_piece)
		*last_piece = cursor->last_piece;

	return page;
}

/*
 * Returns true if the result moves the cursor on to the next piece
 * of the data item.
 */
void ceph_msg_data_advance(struct ceph_msg_data_cursor *cursor, size_t bytes)
{
	bool new_piece;

	BUG_ON(bytes > cursor->resid);
	switch (cursor->data->type) {
	case CEPH_MSG_DATA_PAGELIST:
		new_piece = ceph_msg_data_pagelist_advance(cursor, bytes);
		break;
	case CEPH_MSG_DATA_PAGES:
		new_piece = ceph_msg_data_pages_advance(cursor, bytes);
		break;
#ifdef CONFIG_BLOCK
	case CEPH_MSG_DATA_BIO:
		new_piece = ceph_msg_data_bio_advance(cursor, bytes);
		break;
#endif /* CONFIG_BLOCK */
	case CEPH_MSG_DATA_BVECS:
		new_piece = ceph_msg_data_bvecs_advance(cursor, bytes);
		break;
	case CEPH_MSG_DATA_NONE:
	default:
		BUG();
		break;
	}
	cursor->total_resid -= bytes;

	if (!cursor->resid && cursor->total_resid) {
		WARN_ON(!cursor->last_piece);
		cursor->data++;
		__ceph_msg_data_cursor_init(cursor);
		new_piece = true;
	}
	cursor->need_crc = new_piece;
}
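The next/advance protocol driven by callers of these two functions boils down to a two-level countdown: bytes left in the data item (`resid`) and bytes left in the current piece. A hedged userspace sketch of just that bookkeeping (`struct cursor` and `cursor_advance` are hypothetical names, with a fixed piece size standing in for page-sized pieces):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the msg-data cursor bookkeeping. */
struct cursor {
	size_t resid;	/* bytes left in the data item */
	size_t piece;	/* bytes left in the current piece */
};

/* Advance by @bytes; returns 1 only when this moves the cursor on to
 * a new piece, echoing ceph_msg_data_advance()'s new_piece result. */
static int cursor_advance(struct cursor *c, size_t bytes, size_t piece_size)
{
	c->resid -= bytes;
	c->piece -= bytes;
	if (!c->resid)
		return 0;		/* no more data */
	if (c->piece)
		return 0;		/* more bytes in this piece */
	/* current piece consumed: start the next (possibly short) piece */
	c->piece = c->resid < piece_size ? c->resid : piece_size;
	return 1;
}
```

As with `need_crc` above, the "new piece" transition is the natural point to restart any per-piece work such as CRC accumulation.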

u32 ceph_crc32c_page(u32 crc, struct page *page, unsigned int page_offset,
		     unsigned int length)
{
	char *kaddr;

	kaddr = kmap(page);
	BUG_ON(kaddr == NULL);
	crc = crc32c(crc, kaddr + page_offset, length);
	kunmap(page);

	return crc;
}

bool ceph_addr_is_blank(const struct ceph_entity_addr *addr)
{
	struct sockaddr_storage ss = addr->in_addr; /* align */
	struct in_addr *addr4 = &((struct sockaddr_in *)&ss)->sin_addr;
	struct in6_addr *addr6 = &((struct sockaddr_in6 *)&ss)->sin6_addr;

	switch (ss.ss_family) {
	case AF_INET:
		return addr4->s_addr == htonl(INADDR_ANY);
	case AF_INET6:
		return ipv6_addr_any(addr6);
	default:
		return true;
	}
}
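The IPv4 arm of the check above compares against `htonl(INADDR_ANY)` because `s_addr` is stored in network byte order, so a "blank" address is the 0.0.0.0 wildcard. A hedged userspace sketch of just that arm, using the standard sockets API (`addr4_is_blank` is a hypothetical name):

```c
#include <assert.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* Userspace sketch of ceph_addr_is_blank()'s AF_INET case: a "blank"
 * address is the INADDR_ANY wildcard (0.0.0.0), in network byte order. */
static int addr4_is_blank(const struct sockaddr_in *sin)
{
	return sin->sin_addr.s_addr == htonl(INADDR_ANY);
}
```

A zeroed `sockaddr_in` is blank by this test; any concrete peer address is not.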

int ceph_addr_port(const struct ceph_entity_addr *addr)
{
	switch (get_unaligned(&addr->in_addr.ss_family)) {
	case AF_INET:
		return ntohs(get_unaligned(&((struct sockaddr_in *)&addr->in_addr)->sin_port));
	case AF_INET6:
		return ntohs(get_unaligned(&((struct sockaddr_in6 *)&addr->in_addr)->sin6_port));
	}
	return 0;
}

void ceph_addr_set_port(struct ceph_entity_addr *addr, int p)
{
	switch (get_unaligned(&addr->in_addr.ss_family)) {
	case AF_INET:
		put_unaligned(htons(p), &((struct sockaddr_in *)&addr->in_addr)->sin_port);
		break;
	case AF_INET6:
		put_unaligned(htons(p), &((struct sockaddr_in6 *)&addr->in_addr)->sin6_port);
		break;
	}
}

/*
 * Unlike other *_pton function semantics, zero indicates success.
 */
static int ceph_pton(const char *str, size_t len, struct ceph_entity_addr *addr,
		char delim, const char **ipend)
{
	memset(&addr->in_addr, 0, sizeof(addr->in_addr));

	if (in4_pton(str, len, (u8 *)&((struct sockaddr_in *)&addr->in_addr)->sin_addr.s_addr, delim, ipend)) {
		put_unaligned(AF_INET, &addr->in_addr.ss_family);
		return 0;
	}

	if (in6_pton(str, len, (u8 *)&((struct sockaddr_in6 *)&addr->in_addr)->sin6_addr.s6_addr, delim, ipend)) {
		put_unaligned(AF_INET6, &addr->in_addr.ss_family);
		return 0;
	}

	return -EINVAL;
}

/*
 * Extract hostname string and resolve using kernel DNS facility.
 */
#ifdef CONFIG_CEPH_LIB_USE_DNS_RESOLVER
static int ceph_dns_resolve_name(const char *name, size_t namelen,
		struct ceph_entity_addr *addr, char delim, const char **ipend)
{
	const char *end, *delim_p;
	char *colon_p, *ip_addr = NULL;
	int ip_len, ret;

	/*
	 * The end of the hostname occurs immediately preceding the delimiter or
	 * the port marker (':') where the delimiter takes precedence.
	 */
	delim_p = memchr(name, delim, namelen);
	colon_p = memchr(name, ':', namelen);

	if (delim_p && colon_p)
		end = delim_p < colon_p ? delim_p : colon_p;
	else if (!delim_p && colon_p)
		end = colon_p;
	else {
		end = delim_p;
		if (!end) /* case: hostname:/ */
			end = name + namelen;
	}

	if (end <= name)
		return -EINVAL;

	/* do dns_resolve upcall */
	ip_len = dns_query(current->nsproxy->net_ns,
			   NULL, name, end - name, NULL, &ip_addr, NULL, false);
	if (ip_len > 0)
		ret = ceph_pton(ip_addr, ip_len, addr, -1, NULL);
	else
		ret = -ESRCH;

	kfree(ip_addr);

	*ipend = end;

	pr_info("resolve '%.*s' (ret=%d): %s\n", (int)(end - name), name,
		ret, ret ? "failed" : ceph_pr_addr(addr));

	return ret;
}
#else
static inline int ceph_dns_resolve_name(const char *name, size_t namelen,
		struct ceph_entity_addr *addr, char delim, const char **ipend)
{
	return -EINVAL;
}
#endif

/*
 * Parse a server name (IP or hostname). If a valid IP address is not found
 * then try to extract a hostname to resolve using userspace DNS upcall.
 */
static int ceph_parse_server_name(const char *name, size_t namelen,
		struct ceph_entity_addr *addr, char delim, const char **ipend)
{
	int ret;

	ret = ceph_pton(name, namelen, addr, delim, ipend);
	if (ret)
		ret = ceph_dns_resolve_name(name, namelen, addr, delim, ipend);

	return ret;
}

/*
 * Parse an ip[:port] list into an addr array.  Use the default
 * monitor port if a port isn't specified.
 */
int ceph_parse_ips(const char *c, const char *end,
		   struct ceph_entity_addr *addr,
		   int max_count, int *count, char delim)
{
	int i, ret = -EINVAL;
	const char *p = c;

	dout("parse_ips on '%.*s'\n", (int)(end-c), c);
	for (i = 0; i < max_count; i++) {
		char cur_delim = delim;
		const char *ipend;
		int port;

		if (*p == '[') {
			cur_delim = ']';
			p++;
		}

		ret = ceph_parse_server_name(p, end - p, &addr[i], cur_delim,
					     &ipend);
		if (ret)
			goto bad;
		ret = -EINVAL;

		p = ipend;

		if (cur_delim == ']') {
			if (*p != ']') {
				dout("missing matching ']'\n");
				goto bad;
			}
			p++;
		}

		/* port? */
		if (p < end && *p == ':') {
			port = 0;
			p++;
			while (p < end && *p >= '0' && *p <= '9') {
				port = (port * 10) + (*p - '0');
				p++;
			}
			if (port == 0)
				port = CEPH_MON_PORT;
			else if (port > 65535)
				goto bad;
		} else {
			port = CEPH_MON_PORT;
		}

		ceph_addr_set_port(&addr[i], port);
		/*
		 * We want the type to be set according to ms_mode
		 * option, but options are normally parsed after mon
		 * addresses.  Rather than complicating parsing, set
		 * to LEGACY and override in build_initial_monmap()
		 * for mon addresses and ceph_messenger_init() for
		 * ip option.
		 */
		addr[i].type = CEPH_ENTITY_ADDR_TYPE_LEGACY;
		addr[i].nonce = 0;

		dout("%s got %s\n", __func__, ceph_pr_addr(&addr[i]));

		if (p == end)
			break;
		if (*p != delim)
			goto bad;
		p++;
	}

	if (p != end)
		goto bad;

	if (count)
		*count = i + 1;
	return 0;

bad:
	return ret;
}

/*
 * Process message.  This happens in the worker thread.  The callback should
 * be careful not to do anything that waits on other incoming messages or it
 * may deadlock.
 */
void ceph_con_process_message(struct ceph_connection *con)
{
	struct ceph_msg *msg = con->in_msg;

	BUG_ON(con->in_msg->con != con);
	con->in_msg = NULL;

	/* if first message, set peer_name */
	if (con->peer_name.type == 0)
		con->peer_name = msg->hdr.src;

	con->in_seq++;
	mutex_unlock(&con->mutex);

	dout("===== %p %llu from %s%lld %d=%s len %d+%d+%d (%u %u %u) =====\n",
	     msg, le64_to_cpu(msg->hdr.seq),
	     ENTITY_NAME(msg->hdr.src),
	     le16_to_cpu(msg->hdr.type),
	     ceph_msg_type_name(le16_to_cpu(msg->hdr.type)),
	     le32_to_cpu(msg->hdr.front_len),
	     le32_to_cpu(msg->hdr.middle_len),
	     le32_to_cpu(msg->hdr.data_len),
	     con->in_front_crc, con->in_middle_crc, con->in_data_crc);
	con->ops->dispatch(con, msg);

	mutex_lock(&con->mutex);
}

/*
 * Atomically queue work on a connection after the specified delay.
 * Bump @con reference to avoid races with connection teardown.
 * Returns 0 if work was queued, or an error code otherwise.
 */
static int queue_con_delay(struct ceph_connection *con, unsigned long delay)
{
	if (!con->ops->get(con)) {
		dout("%s %p ref count 0\n", __func__, con);
		return -ENOENT;
	}

	if (delay >= HZ)
		delay = round_jiffies_relative(delay);

	dout("%s %p %lu\n", __func__, con, delay);
	if (!queue_delayed_work(ceph_msgr_wq, &con->work, delay)) {
		dout("%s %p - already queued\n", __func__, con);
		con->ops->put(con);
		return -EBUSY;
	}

	return 0;
}

static void queue_con(struct ceph_connection *con)
{
	(void) queue_con_delay(con, 0);
}

static void cancel_con(struct ceph_connection *con)
{
	if (cancel_delayed_work(&con->work)) {
		dout("%s %p\n", __func__, con);
		con->ops->put(con);
	}
}

static bool con_sock_closed(struct ceph_connection *con)
{
	if (!ceph_con_flag_test_and_clear(con, CEPH_CON_F_SOCK_CLOSED))
		return false;

#define CASE(x)								\
	case CEPH_CON_S_ ## x:						\
		con->error_msg = "socket closed (con state " #x ")";	\
		break;

	switch (con->state) {
	CASE(CLOSED);
	CASE(PREOPEN);
	CASE(V1_BANNER);
	CASE(V1_CONNECT_MSG);
	CASE(V2_BANNER_PREFIX);
	CASE(V2_BANNER_PAYLOAD);
	CASE(V2_HELLO);
	CASE(V2_AUTH);
	CASE(V2_AUTH_SIGNATURE);
	CASE(V2_SESSION_CONNECT);
	CASE(V2_SESSION_RECONNECT);
	CASE(OPEN);
	CASE(STANDBY);
	default:
		BUG();
	}
#undef CASE

	return true;
}

static bool con_backoff(struct ceph_connection *con)
{
	int ret;

	if (!ceph_con_flag_test_and_clear(con, CEPH_CON_F_BACKOFF))
		return false;

	ret = queue_con_delay(con, con->delay);
	if (ret) {
		dout("%s: con %p FAILED to back off %lu\n", __func__,
		     con, con->delay);
		BUG_ON(ret == -ENOENT);
		ceph_con_flag_set(con, CEPH_CON_F_BACKOFF);
	}

	return true;
}

/* Finish fault handling; con->mutex must *not* be held here */

static void con_fault_finish(struct ceph_connection *con)
{
	dout("%s %p\n", __func__, con);

	/*
	 * in case we faulted due to authentication, invalidate our
	 * current tickets so that we can get new ones.
	 */
	if (con->v1.auth_retry) {
		dout("auth_retry %d, invalidating\n", con->v1.auth_retry);
		if (con->ops->invalidate_authorizer)
			con->ops->invalidate_authorizer(con);
		con->v1.auth_retry = 0;
	}

	if (con->ops->fault)
		con->ops->fault(con);
}

/*
 * Do some work on a connection.  Drop a connection ref when we're done.
 */
static void ceph_con_workfn(struct work_struct *work)
{
	struct ceph_connection *con = container_of(work, struct ceph_connection,
						   work.work);
	bool fault;

	mutex_lock(&con->mutex);
	while (true) {
		int ret;

		if ((fault = con_sock_closed(con))) {
			dout("%s: con %p SOCK_CLOSED\n", __func__, con);
			break;
		}
		if (con_backoff(con)) {
			dout("%s: con %p BACKOFF\n", __func__, con);
			break;
		}
		if (con->state == CEPH_CON_S_STANDBY) {
			dout("%s: con %p STANDBY\n", __func__, con);
			break;
		}
		if (con->state == CEPH_CON_S_CLOSED) {
			dout("%s: con %p CLOSED\n", __func__, con);
			BUG_ON(con->sock);
			break;
		}
		if (con->state == CEPH_CON_S_PREOPEN) {
			dout("%s: con %p PREOPEN\n", __func__, con);
			BUG_ON(con->sock);
		}

		if (ceph_msgr2(from_msgr(con->msgr)))
			ret = ceph_con_v2_try_read(con);
		else
			ret = ceph_con_v1_try_read(con);
		if (ret < 0) {
			if (ret == -EAGAIN)
				continue;
			if (!con->error_msg)
				con->error_msg = "socket error on read";
			fault = true;
			break;
		}

		if (ceph_msgr2(from_msgr(con->msgr)))
			ret = ceph_con_v2_try_write(con);
		else
			ret = ceph_con_v1_try_write(con);
		if (ret < 0) {
			if (ret == -EAGAIN)
				continue;
			if (!con->error_msg)
				con->error_msg = "socket error on write";
			fault = true;
		}

		break;	/* If we make it to here, we're done */
	}
	if (fault)
		con_fault(con);
	mutex_unlock(&con->mutex);

	if (fault)
		con_fault_finish(con);

	con->ops->put(con);
}

/*
 * Generic error/fault handler.  A retry mechanism is used with
 * exponential backoff.
 */
static void con_fault(struct ceph_connection *con)
{
	dout("fault %p state %d to peer %s\n",
	     con, con->state, ceph_pr_addr(&con->peer_addr));

	pr_warn("%s%lld %s %s\n", ENTITY_NAME(con->peer_name),
		ceph_pr_addr(&con->peer_addr), con->error_msg);
	con->error_msg = NULL;

	WARN_ON(con->state == CEPH_CON_S_STANDBY ||
		con->state == CEPH_CON_S_CLOSED);

	ceph_con_reset_protocol(con);

	if (ceph_con_flag_test(con, CEPH_CON_F_LOSSYTX)) {
		dout("fault on LOSSYTX channel, marking CLOSED\n");
		con->state = CEPH_CON_S_CLOSED;
		return;
	}

	/* Requeue anything that hasn't been acked */
	list_splice_init(&con->out_sent, &con->out_queue);

	/* If there are no messages queued or keepalive pending, place
	 * the connection in a STANDBY state */
	if (list_empty(&con->out_queue) &&
	    !ceph_con_flag_test(con, CEPH_CON_F_KEEPALIVE_PENDING)) {
		dout("fault %p setting STANDBY clearing WRITE_PENDING\n", con);
		ceph_con_flag_clear(con, CEPH_CON_F_WRITE_PENDING);
		con->state = CEPH_CON_S_STANDBY;
	} else {
		/* retry after a delay. */
		con->state = CEPH_CON_S_PREOPEN;
		if (!con->delay) {
			con->delay = BASE_DELAY_INTERVAL;
		} else if (con->delay < MAX_DELAY_INTERVAL) {
			con->delay *= 2;
			if (con->delay > MAX_DELAY_INTERVAL)
				con->delay = MAX_DELAY_INTERVAL;
		}
		ceph_con_flag_set(con, CEPH_CON_F_BACKOFF);
		queue_con(con);
	}
}

void ceph_messenger_reset_nonce(struct ceph_messenger *msgr)
{
	u32 nonce = le32_to_cpu(msgr->inst.addr.nonce) + 1000000;

	msgr->inst.addr.nonce = cpu_to_le32(nonce);
	ceph_encode_my_addr(msgr);
}

/*
 * initialize a new messenger instance
 */
void ceph_messenger_init(struct ceph_messenger *msgr,
			 struct ceph_entity_addr *myaddr)
{
	spin_lock_init(&msgr->global_seq_lock);

	if (myaddr) {
		memcpy(&msgr->inst.addr.in_addr, &myaddr->in_addr,
		       sizeof(msgr->inst.addr.in_addr));
		ceph_addr_set_port(&msgr->inst.addr, 0);
	}

	/*
	 * Since nautilus, clients are identified using type ANY.
	 * For msgr1, ceph_encode_banner_addr() munges it to NONE.
	 */
	msgr->inst.addr.type = CEPH_ENTITY_ADDR_TYPE_ANY;

	/* generate a random non-zero nonce */
	do {
		get_random_bytes(&msgr->inst.addr.nonce,
				 sizeof(msgr->inst.addr.nonce));
	} while (!msgr->inst.addr.nonce);
	ceph_encode_my_addr(msgr);

	atomic_set(&msgr->stopping, 0);
	write_pnet(&msgr->net, get_net(current->nsproxy->net_ns));

	dout("%s %p\n", __func__, msgr);
}

void ceph_messenger_fini(struct ceph_messenger *msgr)
{
	put_net(read_pnet(&msgr->net));
}

static void msg_con_set(struct ceph_msg *msg, struct ceph_connection *con)
{
	if (msg->con)
		msg->con->ops->put(msg->con);

	msg->con = con ? con->ops->get(con) : NULL;
	BUG_ON(msg->con != con);
}

static void clear_standby(struct ceph_connection *con)
{
	/* come back from STANDBY? */
	if (con->state == CEPH_CON_S_STANDBY) {
		dout("clear_standby %p and ++connect_seq\n", con);
		con->state = CEPH_CON_S_PREOPEN;
		con->v1.connect_seq++;
		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_WRITE_PENDING));
		WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_KEEPALIVE_PENDING));
	}
}

/*
 * Queue up an outgoing message on the given connection.
 *
 * Consumes a ref on @msg.
 */
void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg)
{
	/* set src+dst */
	msg->hdr.src = con->msgr->inst.name;
	BUG_ON(msg->front.iov_len != le32_to_cpu(msg->hdr.front_len));
	msg->needs_out_seq = true;

	mutex_lock(&con->mutex);
	/*
	 * From the blame annotation (commit "libceph: have messages take
	 * a connection reference"):
	 *
	 * There are essentially two types of ceph messages: incoming and
	 * outgoing.  Outgoing messages are always allocated via
	 * ceph_msg_new(), and at the time of their allocation they are not
	 * associated with any particular connection.  Incoming messages are
	 * always allocated via ceph_con_in_msg_alloc(), and they are
	 * initially associated with the connection from which incoming data
	 * will be placed into the message.
	 *
	 * When an outgoing message gets sent, it becomes associated with a
	 * connection and remains that way until the message is successfully
	 * sent.  The association of an incoming message goes away at the
	 * point it is sent to an upper layer via a con->ops->dispatch method.
	 *
	 * Every message holds a reference (and a pointer) to a connection if
	 * and only if it is associated with that connection.  Any time a
	 * message is on one of a connection's lists, it is associated with
	 * that connection, so outgoing messages take a reference at the point
	 * they are added to the out_queue (in ceph_con_send()) and drop it
	 * when removed from either of the connection's lists (in
	 * ceph_msg_remove()), at which point the connection pointer becomes
	 * null.  Incoming messages take a reference to the connection when
	 * allocated and release it when handed off via the dispatch method.
	 * We should never fail to get a connection reference for a message,
	 * since the caller should already hold one.
	 */

	if (con->state == CEPH_CON_S_CLOSED) {
		dout("con_send %p closed, dropping %p\n", con, msg);
		ceph_msg_put(msg);
		mutex_unlock(&con->mutex);
		return;
	}

	msg_con_set(msg, con);

	BUG_ON(!list_empty(&msg->list_head));
	list_add_tail(&msg->list_head, &con->out_queue);
	dout("----- %p to %s%lld %d=%s len %d+%d+%d -----\n", msg,
	     ENTITY_NAME(con->peer_name), le16_to_cpu(msg->hdr.type),
	     ceph_msg_type_name(le16_to_cpu(msg->hdr.type)),
	     le32_to_cpu(msg->hdr.front_len),
	     le32_to_cpu(msg->hdr.middle_len),
	     le32_to_cpu(msg->hdr.data_len));

	clear_standby(con);
	mutex_unlock(&con->mutex);

	/* if there wasn't anything waiting to send before, queue
	 * new work */
	if (!ceph_con_flag_test_and_set(con, CEPH_CON_F_WRITE_PENDING))
		queue_con(con);
}
EXPORT_SYMBOL(ceph_con_send);

/*
 * Revoke a message that was previously queued for send
 */
void ceph_msg_revoke(struct ceph_msg *msg)
{
	struct ceph_connection *con = msg->con;

	if (!con) {
		dout("%s msg %p null con\n", __func__, msg);
		return;		/* Message not in our possession */
	}

	mutex_lock(&con->mutex);
	if (list_empty(&msg->list_head)) {
		WARN_ON(con->out_msg == msg);
		dout("%s con %p msg %p not linked\n", __func__, con, msg);
		mutex_unlock(&con->mutex);
		return;
	}
	/*
	 * From the blame annotation (commit "libceph: fix ceph_msg_revoke()"):
	 *
	 * Revoking a "was sending" message had a number of problems:
	 *
	 * (1) No attempt was made to revoke data - only kvecs contributed
	 * to con->out_skip.  Once the header (envelope) is written to the
	 * socket, the peer learns data_len and expects at least data_len
	 * bytes to follow front or front+middle, so anything sent after a
	 * mid-data revoke was counted by the OSD towards the now revoked
	 * message's data portion.  The most common effect was an eventual
	 * hang, with higher layers stuck waiting for a reply to a message
	 * treated by the OSD as a bunch of data bytes.
	 *
	 * (2) Flat out zeroing con->out_kvec_bytes worth of bytes to handle
	 * kvecs was wrong: revoking before the tag was sent out, or while
	 * the header was being sent, caused a connection reset due to a bad
	 * tag (0 is not a valid tag) or a bad header CRC, defeating the
	 * purpose of revoke.
	 *
	 * (3) con->out_skip was not reset on connection reset, leading to
	 * spurious connection resets.
	 *
	 * The fix for (2) is to never zero the tag or the header, i.e. send
	 * out tag+header regardless of when ceph_msg_revoke() is called;
	 * since ceph_msg_revoke() rips out con->out_msg, a "message out
	 * temp" is introduced and the header is copied into it before
	 * sending.
	 */

	dout("%s con %p msg %p was linked\n", __func__, con, msg);
	msg->hdr.seq = 0;
	ceph_msg_remove(msg);

	if (con->out_msg == msg) {
		WARN_ON(con->state != CEPH_CON_S_OPEN);
		dout("%s con %p msg %p was sending\n", __func__, con, msg);
		if (ceph_msgr2(from_msgr(con->msgr)))
			ceph_con_v2_revoke(con);
		else
			ceph_con_v1_revoke(con);
		ceph_msg_put(con->out_msg);
		con->out_msg = NULL;
	} else {
		dout("%s con %p msg %p not current, out_msg %p\n", __func__,
		     con, msg, con->out_msg);
	}
	mutex_unlock(&con->mutex);
}

/*
 * Revoke a message that we may be reading data into
 */
void ceph_msg_revoke_incoming(struct ceph_msg *msg)
{
	struct ceph_connection *con = msg->con;

	if (!con) {
		dout("%s msg %p null con\n", __func__, msg);
		return;		/* Message not in our possession */
	}

	mutex_lock(&con->mutex);
	if (con->in_msg == msg) {
		WARN_ON(con->state != CEPH_CON_S_OPEN);
		dout("%s con %p msg %p was recving\n", __func__, con, msg);
		if (ceph_msgr2(from_msgr(con->msgr)))
			ceph_con_v2_revoke_incoming(con);
		else
			ceph_con_v1_revoke_incoming(con);
		ceph_msg_put(con->in_msg);
		con->in_msg = NULL;
	} else {
		dout("%s con %p msg %p not current, in_msg %p\n", __func__,
		     con, msg, con->in_msg);
	}
	mutex_unlock(&con->mutex);
}

/*
 * Queue a keepalive byte to ensure the tcp connection is alive.
 */
void ceph_con_keepalive(struct ceph_connection *con)
{
	dout("con_keepalive %p\n", con);
	mutex_lock(&con->mutex);
	clear_standby(con);
	ceph_con_flag_set(con, CEPH_CON_F_KEEPALIVE_PENDING);
	mutex_unlock(&con->mutex);

	if (!ceph_con_flag_test_and_set(con, CEPH_CON_F_WRITE_PENDING))
		queue_con(con);
}
EXPORT_SYMBOL(ceph_con_keepalive);

bool ceph_con_keepalive_expired(struct ceph_connection *con,
				unsigned long interval)
{
	if (interval > 0 &&
	    (con->peer_features & CEPH_FEATURE_MSGR_KEEPALIVE2)) {
		struct timespec64 now;
		struct timespec64 ts;

		ktime_get_real_ts64(&now);
		jiffies_to_timespec64(interval, &ts);
		ts = timespec64_add(con->last_keepalive_ack, ts);
		return timespec64_compare(&now, &ts) >= 0;
	}
	return false;
}

static struct ceph_msg_data *ceph_msg_data_add(struct ceph_msg *msg)
{
	BUG_ON(msg->num_data_items >= msg->max_data_items);
	return &msg->data[msg->num_data_items++];
}

static void ceph_msg_data_destroy(struct ceph_msg_data *data)
{
	if (data->type == CEPH_MSG_DATA_PAGES && data->own_pages) {
		int num_pages = calc_pages_for(data->alignment, data->length);

		ceph_release_page_vector(data->pages, num_pages);
	} else if (data->type == CEPH_MSG_DATA_PAGELIST) {
		ceph_pagelist_release(data->pagelist);
	}
}

void ceph_msg_data_add_pages(struct ceph_msg *msg, struct page **pages,
			     size_t length, size_t alignment, bool own_pages)
{
	struct ceph_msg_data *data;

	BUG_ON(!pages);
	BUG_ON(!length);

	data = ceph_msg_data_add(msg);
	data->type = CEPH_MSG_DATA_PAGES;
	data->pages = pages;
	data->length = length;
	data->alignment = alignment & ~PAGE_MASK;
	data->own_pages = own_pages;

	msg->data_length += length;
}
EXPORT_SYMBOL(ceph_msg_data_add_pages);

void ceph_msg_data_add_pagelist(struct ceph_msg *msg,
				struct ceph_pagelist *pagelist)
{
	struct ceph_msg_data *data;

	BUG_ON(!pagelist);
	BUG_ON(!pagelist->length);

	data = ceph_msg_data_add(msg);
	data->type = CEPH_MSG_DATA_PAGELIST;
	refcount_inc(&pagelist->refcnt);
	data->pagelist = pagelist;

	msg->data_length += pagelist->length;
}
EXPORT_SYMBOL(ceph_msg_data_add_pagelist);

#ifdef CONFIG_BLOCK
void ceph_msg_data_add_bio(struct ceph_msg *msg, struct ceph_bio_iter *bio_pos,
			   u32 length)
{
	struct ceph_msg_data *data;

	data = ceph_msg_data_add(msg);
	data->type = CEPH_MSG_DATA_BIO;
	data->bio_pos = *bio_pos;
	data->bio_length = length;

	msg->data_length += length;
}
EXPORT_SYMBOL(ceph_msg_data_add_bio);
#endif	/* CONFIG_BLOCK */

void ceph_msg_data_add_bvecs(struct ceph_msg *msg,
			     struct ceph_bvec_iter *bvec_pos)
{
	struct ceph_msg_data *data;

	data = ceph_msg_data_add(msg);
	data->type = CEPH_MSG_DATA_BVECS;
	data->bvec_pos = *bvec_pos;

	msg->data_length += bvec_pos->iter.bi_size;
}
EXPORT_SYMBOL(ceph_msg_data_add_bvecs);

/*
 * construct a new message with given type, size
 * the new msg has a ref count of 1.
 */
struct ceph_msg *ceph_msg_new2(int type, int front_len, int max_data_items,
			       gfp_t flags, bool can_fail)
{
	struct ceph_msg *m;

	m = kmem_cache_zalloc(ceph_msg_cache, flags);
	if (m == NULL)
		goto out;

	m->hdr.type = cpu_to_le16(type);
	m->hdr.priority = cpu_to_le16(CEPH_MSG_PRIO_DEFAULT);
	m->hdr.front_len = cpu_to_le32(front_len);

	INIT_LIST_HEAD(&m->list_head);
	kref_init(&m->kref);

	/* front */
	if (front_len) {
		m->front.iov_base = kvmalloc(front_len, flags);
		if (m->front.iov_base == NULL) {
			dout("ceph_msg_new can't allocate %d bytes\n",
			     front_len);
			goto out2;
		}
	} else {
		m->front.iov_base = NULL;
	}
	m->front_alloc_len = m->front.iov_len = front_len;

	if (max_data_items) {
		m->data = kmalloc_array(max_data_items, sizeof(*m->data),
					flags);
		if (!m->data)
			goto out2;

		m->max_data_items = max_data_items;
	}

	dout("ceph_msg_new %p front %d\n", m, front_len);
	return m;

out2:
	ceph_msg_put(m);
out:
	if (!can_fail) {
		pr_err("msg_new can't create type %d front %d\n", type,
		       front_len);
		WARN_ON(1);
	} else {
		dout("msg_new can't create type %d front %d\n", type,
		     front_len);
	}
	return NULL;
}
EXPORT_SYMBOL(ceph_msg_new2);

struct ceph_msg *ceph_msg_new(int type, int front_len, gfp_t flags,
			      bool can_fail)
{
	return ceph_msg_new2(type, front_len, 0, flags, can_fail);
}
EXPORT_SYMBOL(ceph_msg_new);

/*
 * Allocate "middle" portion of a message, if it is needed and wasn't
 * allocated by alloc_msg.  This allows us to read a small fixed-size
 * per-type header in the front and then gracefully fail (i.e.,
 * propagate the error to the caller based on info in the front) when
 * the middle is too large.
 */
static int ceph_alloc_middle(struct ceph_connection *con, struct ceph_msg *msg)
{
	int type = le16_to_cpu(msg->hdr.type);
	int middle_len = le32_to_cpu(msg->hdr.middle_len);

	dout("alloc_middle %p type %d %s middle_len %d\n", msg, type,
	     ceph_msg_type_name(type), middle_len);
	BUG_ON(!middle_len);
	BUG_ON(msg->middle);

	msg->middle = ceph_buffer_new(middle_len, GFP_NOFS);
	if (!msg->middle)
		return -ENOMEM;
	return 0;
}

/*
 * Allocate a message for receiving an incoming message on a
 * connection, and save the result in con->in_msg.  Uses the
 * connection's private alloc_msg op if available.
 *
 * Returns 0 on success, or a negative error code.
 *
 * On success, if we set *skip = 1:
 *  - the next message should be skipped and ignored.
 *  - con->in_msg == NULL
 * or if we set *skip = 0:
 *  - con->in_msg is non-null.
 * On error (ENOMEM, EAGAIN, ...),
 *  - con->in_msg == NULL
 */
int ceph_con_in_msg_alloc(struct ceph_connection *con,
			  struct ceph_msg_header *hdr, int *skip)
{
	int middle_len = le32_to_cpu(hdr->middle_len);
	struct ceph_msg *msg;
	int ret = 0;

	BUG_ON(con->in_msg != NULL);
	BUG_ON(!con->ops->alloc_msg);

	mutex_unlock(&con->mutex);
	msg = con->ops->alloc_msg(con, hdr, skip);
	mutex_lock(&con->mutex);
	if (con->state != CEPH_CON_S_OPEN) {
		if (msg)
			ceph_msg_put(msg);
		return -EAGAIN;
	}
	if (msg) {
		BUG_ON(*skip);
		msg_con_set(msg, con);
		con->in_msg = msg;
	} else {
		/*
		 * Null message pointer means either we should skip
		 * this message or we couldn't allocate memory.  The
		 * former is not an error.
		 */
		if (*skip)
			return 0;

		con->error_msg = "error allocating memory for incoming message";
		return -ENOMEM;
	}
	memcpy(&con->in_msg->hdr, hdr, sizeof(*hdr));

	if (middle_len && !con->in_msg->middle) {
		ret = ceph_alloc_middle(con, con->in_msg);
		if (ret < 0) {
			ceph_msg_put(con->in_msg);
			con->in_msg = NULL;
		}
	}

	return ret;
}

void ceph_con_get_out_msg(struct ceph_connection *con)
{
	struct ceph_msg *msg;

	BUG_ON(list_empty(&con->out_queue));
	msg = list_first_entry(&con->out_queue, struct ceph_msg, list_head);
	WARN_ON(msg->con != con);

	/*
	 * Put the message on "sent" list using a ref from ceph_con_send().
	 * It is put when the message is acked or revoked.
	 */
	list_move_tail(&msg->list_head, &con->out_sent);

	/*
	 * Only assign outgoing seq # if we haven't sent this message
	 * yet.  If it is requeued, resend with its original seq.
	 */
	if (msg->needs_out_seq) {
		msg->hdr.seq = cpu_to_le64(++con->out_seq);
		msg->needs_out_seq = false;

		if (con->ops->reencode_message)
			con->ops->reencode_message(msg);
	}

	/*
	 * Get a ref for out_msg.  It is put when we are done sending the
	 * message or in case of a fault.
	 */
	WARN_ON(con->out_msg);
	con->out_msg = ceph_msg_get(msg);
}

/*
 * Free a generically kmalloc'd message.
 */
static void ceph_msg_free(struct ceph_msg *m)
{
	dout("%s %p\n", __func__, m);
	kvfree(m->front.iov_base);
	kfree(m->data);
	kmem_cache_free(ceph_msg_cache, m);
}

static void ceph_msg_release(struct kref *kref)
{
	struct ceph_msg *m = container_of(kref, struct ceph_msg, kref);
	int i;

	dout("%s %p\n", __func__, m);
	WARN_ON(!list_empty(&m->list_head));

	msg_con_set(m, NULL);

	/* drop middle, data, if any */
	if (m->middle) {
		ceph_buffer_put(m->middle);
		m->middle = NULL;
	}

	for (i = 0; i < m->num_data_items; i++)
		ceph_msg_data_destroy(&m->data[i]);

	if (m->pool)
		ceph_msgpool_put(m->pool, m);
	else
		ceph_msg_free(m);
}

struct ceph_msg *ceph_msg_get(struct ceph_msg *msg)
{
	dout("%s %p (was %d)\n", __func__, msg,
	     kref_read(&msg->kref));
	kref_get(&msg->kref);
	return msg;
}
EXPORT_SYMBOL(ceph_msg_get);

void ceph_msg_put(struct ceph_msg *msg)
{
	dout("%s %p (was %d)\n", __func__, msg,
	     kref_read(&msg->kref));
	kref_put(&msg->kref, ceph_msg_release);
}
EXPORT_SYMBOL(ceph_msg_put);

void ceph_msg_dump(struct ceph_msg *msg)
{
	pr_debug("msg_dump %p (front_alloc_len %d length %zd)\n", msg,
		 msg->front_alloc_len, msg->data_length);
	print_hex_dump(KERN_DEBUG, "header: ",
		       DUMP_PREFIX_OFFSET, 16, 1,
		       &msg->hdr, sizeof(msg->hdr), true);
	print_hex_dump(KERN_DEBUG, " front: ",
		       DUMP_PREFIX_OFFSET, 16, 1,
		       msg->front.iov_base, msg->front.iov_len, true);
	if (msg->middle)
		print_hex_dump(KERN_DEBUG, "middle: ",
			       DUMP_PREFIX_OFFSET, 16, 1,
			       msg->middle->vec.iov_base,
			       msg->middle->vec.iov_len, true);
	print_hex_dump(KERN_DEBUG, "footer: ",
		       DUMP_PREFIX_OFFSET, 16, 1,
		       &msg->footer, sizeof(msg->footer), true);
}
EXPORT_SYMBOL(ceph_msg_dump);