License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even
if <5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is in part based on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; these have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _FS_CEPH_OSD_CLIENT_H
#define _FS_CEPH_OSD_CLIENT_H

#include <linux/bitrev.h>
#include <linux/completion.h>
#include <linux/kref.h>
#include <linux/mempool.h>
#include <linux/rbtree.h>
#include <linux/refcount.h>

#include <linux/ceph/types.h>
#include <linux/ceph/osdmap.h>
#include <linux/ceph/messenger.h>
#include <linux/ceph/msgpool.h>
#include <linux/ceph/auth.h>
#include <linux/ceph/pagelist.h>

struct ceph_msg;
struct ceph_snap_context;
struct ceph_osd_request;
struct ceph_osd_client;

/*
 * completion callback for async writepages
 */
typedef void (*ceph_osdc_callback_t)(struct ceph_osd_request *);

#define CEPH_HOMELESS_OSD	-1

/* a given osd we're communicating with */
struct ceph_osd {
	refcount_t o_ref;
	struct ceph_osd_client *o_osdc;
	int o_osd;
	int o_incarnation;
	struct rb_node o_node;
	struct ceph_connection o_con;
	struct rb_root o_requests;
	struct rb_root o_linger_requests;
	struct rb_root o_backoff_mappings;
	struct rb_root o_backoffs_by_id;
	struct list_head o_osd_lru;
	struct ceph_auth_handshake o_auth;
	unsigned long lru_ttl;
	struct list_head o_keepalive_item;
	struct mutex lock;
};

#define CEPH_OSD_SLAB_OPS	2
#define CEPH_OSD_MAX_OPS	16

enum ceph_osd_data_type {
	CEPH_OSD_DATA_TYPE_NONE = 0,
	CEPH_OSD_DATA_TYPE_PAGES,
	CEPH_OSD_DATA_TYPE_PAGELIST,
#ifdef CONFIG_BLOCK
	CEPH_OSD_DATA_TYPE_BIO,
#endif /* CONFIG_BLOCK */
	CEPH_OSD_DATA_TYPE_BVECS,
};

struct ceph_osd_data {
	enum ceph_osd_data_type type;
	union {
		struct {
			struct page **pages;
			u64 length;
			u32 alignment;
			bool pages_from_pool;
			bool own_pages;
		};
		struct ceph_pagelist *pagelist;
#ifdef CONFIG_BLOCK
		struct {
			struct ceph_bio_iter bio_pos;
			u32 bio_length;
		};
#endif /* CONFIG_BLOCK */
		struct {
			struct ceph_bvec_iter bvec_pos;
			u32 num_bvecs;
		};
	};
};

struct ceph_osd_req_op {
	u16 op;			/* CEPH_OSD_OP_* */
	u32 flags;		/* CEPH_OSD_OP_FLAG_* */
	u32 indata_len;		/* request */
	u32 outdata_len;	/* reply */
	s32 rval;

	union {
		struct ceph_osd_data raw_data_in;
		struct {
			u64 offset, length;
			u64 truncate_size;
			u32 truncate_seq;
			struct ceph_osd_data osd_data;
		} extent;
		struct {
			u32 name_len;
			u32 value_len;
			__u8 cmp_op;	/* CEPH_OSD_CMPXATTR_OP_* */
			__u8 cmp_mode;	/* CEPH_OSD_CMPXATTR_MODE_* */
			struct ceph_osd_data osd_data;
		} xattr;
		struct {
			const char *class_name;
			const char *method_name;
			struct ceph_osd_data request_info;
			struct ceph_osd_data request_data;
			struct ceph_osd_data response_data;
			__u8 class_len;
			__u8 method_len;
			u32 indata_len;
		} cls;
		struct {
			u64 cookie;
			__u8 op;	/* CEPH_OSD_WATCH_OP_ */
			u32 gen;
		} watch;
		struct {
			struct ceph_osd_data request_data;
		} notify_ack;
		struct {
			u64 cookie;
			struct ceph_osd_data request_data;
			struct ceph_osd_data response_data;
		} notify;
		struct {
			struct ceph_osd_data response_data;
		} list_watchers;
		struct {
			u64 expected_object_size;
			u64 expected_write_size;
		} alloc_hint;
		struct {
			u64 snapid;
			u64 src_version;
			u8 flags;
			u32 src_fadvise_flags;
			struct ceph_osd_data osd_data;
		} copy_from;
	};
};

struct ceph_osd_request_target {
	struct ceph_object_id base_oid;
	struct ceph_object_locator base_oloc;
	struct ceph_object_id target_oid;
	struct ceph_object_locator target_oloc;

	struct ceph_pg pgid;		/* last raw pg we mapped to */
	struct ceph_spg spgid;		/* last actual spg we mapped to */
	u32 pg_num;
	u32 pg_num_mask;
	struct ceph_osds acting;
	struct ceph_osds up;
	int size;
	int min_size;
	bool sort_bitwise;
	bool recovery_deletes;

	unsigned int flags;		/* CEPH_OSD_FLAG_* */
	bool paused;

	u32 epoch;
	u32 last_force_resend;

	int osd;
};

/* an in-flight request */
struct ceph_osd_request {
	u64 r_tid;			/* unique for this client */
	struct rb_node r_node;
	struct rb_node r_mc_node;	/* map check */
	struct work_struct r_complete_work;
	struct ceph_osd *r_osd;

	struct ceph_osd_request_target r_t;
#define r_base_oid	r_t.base_oid
#define r_base_oloc	r_t.base_oloc
#define r_flags		r_t.flags

	struct ceph_msg *r_request, *r_reply;
	u32 r_sent;			/* >0 if r_request is sending/sent */

	/* request osd ops array */
	unsigned int r_num_ops;

	int r_result;

	struct ceph_osd_client *r_osdc;
	struct kref r_kref;
	bool r_mempool;
	struct completion r_completion;	/* private to osd_client.c */
libceph: change how "safe" callback is used
An osd request currently has two callbacks. They inform the
initiator of the request when we've received confirmation from the
target osd that a request was received, and when the osd indicates
all changes described by the request are durable.
The only time the second callback is used is in the ceph file system
for a synchronous write. There's a race that makes some handling of
this case unsafe. This patch addresses this problem. The error
handling for this callback is also kind of gross, and this patch
changes that as well.
In ceph_sync_write(), if a safe callback is requested we want to add
the request on the ceph inode's unsafe items list. Because items on
this list must have their tid set (by ceph_osd_start_request()), the
request is added *after* the call to that function returns. The
problem with this is that there's a race between starting the
request and adding it to the unsafe items list; the request may
already be complete before ceph_sync_write() even begins to put it
on the list.
To address this, we change the way the "safe" callback is used.
Rather than just calling it when the request is "safe", we use it to
notify the initiator of the bounds (start and end) of the period
during which the request is *unsafe*. So the initiator gets notified
just before the request gets sent to the osd (when it is "unsafe"),
and again when it's known the results are durable (it's no longer
unsafe). The first call will get made in __send_request(), just
before the request message gets sent to the messenger for the first
time. That function is only called by __send_queued(), which is
always called with the osd client's request mutex held.
We then have this callback function insert the request on the ceph
inode's unsafe list when we're told the request is unsafe. This
will avoid the race because this call will be made under protection
of the osd client's request mutex. It also nicely groups the setup
and cleanup of the state associated with managing unsafe requests.
The name of the "safe" callback field is changed to "unsafe" to
better reflect its new purpose. It has a Boolean "unsafe" parameter
to indicate whether the request is becoming unsafe or is now safe.
Because the "msg" parameter wasn't used, we drop it.
This resolves the original problem reported in:
http://tracker.ceph.com/issues/4706
Reported-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
2013-04-15 20:20:42 +04:00
	ceph_osdc_callback_t r_callback;

	struct inode *r_inode;		/* for use by callbacks */
	struct list_head r_private_item;	/* ditto */
	void *r_priv;			/* ditto */

	/* set by submitter */
	u64 r_snapid;			/* for reads, CEPH_NOSNAP o/w */
	struct ceph_snap_context *r_snapc;	/* for writes */
	struct timespec64 r_mtime;	/* ditto */
	u64 r_data_offset;		/* ditto */
	bool r_linger;			/* don't resend on failure */

	/* internal */
	unsigned long r_stamp;		/* jiffies, send or check time */
	unsigned long r_start_stamp;	/* jiffies */
	int r_attempts;
	u32 r_map_dne_bound;

	struct ceph_osd_req_op r_ops[];
};
libceph: follow redirect replies from osds
Follow redirect replies from osds, for details see ceph.git commit
fbbe3ad1220799b7bb00ea30fce581c5eadaf034.
v1 (current) version of redirect reply consists of oloc and oid, which
expands to pool, key, nspace, hash and oid. However, server-side code
that would populate anything other than pool doesn't exist yet, and
hence this commit adds support for pool redirects only. To make sure
that future server-side updates don't break us, we decode all fields
and, if any of key, nspace, hash or oid have a non-default value, error
out with "corrupt osd_op_reply ..." message.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
2014-01-27 19:40:20 +04:00
struct ceph_request_redirect {
	struct ceph_object_locator oloc;
};

/*
 * osd request identifier
 *
 * caller name + incarnation# + tid to unique identify this request
 */
struct ceph_osd_reqid {
	struct ceph_entity_name name;
	__le64 tid;
	__le32 inc;
} __packed;

struct ceph_blkin_trace_info {
	__le64 trace_id;
	__le64 span_id;
	__le64 parent_span_id;
} __packed;

typedef void (*rados_watchcb2_t)(void *arg, u64 notify_id, u64 cookie,
				 u64 notifier_id, void *data, size_t data_len);
typedef void (*rados_watcherrcb_t)(void *arg, u64 cookie, int err);

struct ceph_osd_linger_request {
	struct ceph_osd_client *osdc;
	u64 linger_id;
	bool committed;
	bool is_watch;			/* watch or notify */

	struct ceph_osd *osd;
	struct ceph_osd_request *reg_req;
	struct ceph_osd_request *ping_req;
	unsigned long ping_sent;
	unsigned long watch_valid_thru;
	struct list_head pending_lworks;

	struct ceph_osd_request_target t;
	u32 map_dne_bound;

	struct timespec64 mtime;

	struct kref kref;
	struct mutex lock;
	struct rb_node node;		/* osd */
	struct rb_node osdc_node;	/* osdc */
	struct rb_node mc_node;		/* map check */
	struct list_head scan_item;

	struct completion reg_commit_wait;
	struct completion notify_finish_wait;
	int reg_commit_error;
	int notify_finish_error;
	int last_error;

	u32 register_gen;
	u64 notify_id;

	rados_watchcb2_t wcb;
	rados_watcherrcb_t errcb;
	void *data;

	struct page ***preply_pages;
	size_t *preply_len;
};

struct ceph_watch_item {
	struct ceph_entity_name name;
	u64 cookie;
	struct ceph_entity_addr addr;
};

struct ceph_spg_mapping {
	struct rb_node node;
	struct ceph_spg spgid;

	struct rb_root backoffs;
};

struct ceph_hobject_id {
	void *key;
	size_t key_len;
	void *oid;
	size_t oid_len;
	u64 snapid;
	u32 hash;
	u8 is_max;
	void *nspace;
	size_t nspace_len;
	s64 pool;

	/* cache */
	u32 hash_reverse_bits;
};

static inline void ceph_hoid_build_hash_cache(struct ceph_hobject_id *hoid)
{
	hoid->hash_reverse_bits = bitrev32(hoid->hash);
}
/*
 * PG-wide backoff: [begin, end)
 * per-object backoff: begin == end
 */
struct ceph_osd_backoff {
	struct rb_node spg_node;
	struct rb_node id_node;

	struct ceph_spg spgid;
	u64 id;
	struct ceph_hobject_id *begin;
	struct ceph_hobject_id *end;
};

#define CEPH_LINGER_ID_START	0xffff000000000000ULL

struct ceph_osd_client {
	struct ceph_client *client;

	struct ceph_osdmap *osdmap;	/* current map */
	struct rw_semaphore lock;

	struct rb_root osds;		/* osds */
	struct list_head osd_lru;	/* idle osds */
	spinlock_t osd_lru_lock;
	u32 epoch_barrier;
	struct ceph_osd homeless_osd;
	atomic64_t last_tid;		/* tid of last request */
	u64 last_linger_id;
	struct rb_root linger_requests;	/* lingering requests */
	struct rb_root map_checks;
	struct rb_root linger_map_checks;
	atomic_t num_requests;
	atomic_t num_homeless;
	int abort_err;
	struct delayed_work timeout_work;
	struct delayed_work osds_timeout_work;
#ifdef CONFIG_DEBUG_FS
	struct dentry *debugfs_file;
#endif

	mempool_t *req_mempool;

	struct ceph_msgpool msgpool_op;
	struct ceph_msgpool msgpool_op_reply;

	struct workqueue_struct *notify_wq;
	struct workqueue_struct *completion_wq;
};

static inline bool ceph_osdmap_flag(struct ceph_osd_client *osdc, int flag)
{
	return osdc->osdmap->flags & flag;
}

extern int ceph_osdc_setup(void);
extern void ceph_osdc_cleanup(void);

extern int ceph_osdc_init(struct ceph_osd_client *osdc,
			  struct ceph_client *client);
extern void ceph_osdc_stop(struct ceph_osd_client *osdc);
extern void ceph_osdc_reopen_osds(struct ceph_osd_client *osdc);

extern void ceph_osdc_handle_reply(struct ceph_osd_client *osdc,
				   struct ceph_msg *msg);
extern void ceph_osdc_handle_map(struct ceph_osd_client *osdc,
				 struct ceph_msg *msg);
void ceph_osdc_update_epoch_barrier(struct ceph_osd_client *osdc, u32 eb);
void ceph_osdc_abort_requests(struct ceph_osd_client *osdc, int err);
void ceph_osdc_clear_abort_err(struct ceph_osd_client *osdc);

#define osd_req_op_data(oreq, whch, typ, fld)				\
({									\
	struct ceph_osd_request *__oreq = (oreq);			\
	unsigned int __whch = (whch);					\
	BUG_ON(__whch >= __oreq->r_num_ops);				\
	&__oreq->r_ops[__whch].typ.fld;					\
})
extern void osd_req_op_init(struct ceph_osd_request *osd_req,
			    unsigned int which, u16 opcode, u32 flags);

extern void osd_req_op_raw_data_in_pages(struct ceph_osd_request *,
					 unsigned int which,
					 struct page **pages, u64 length,
					 u32 alignment, bool pages_from_pool,
					 bool own_pages);
libceph: define source request op functions
The rbd code has a function that allocates and populates a
ceph_osd_req_op structure (the in-core version of an osd request
operation). When reviewed, Josh suggested two things: that the
big varargs function might be better split into type-specific
functions; and that this functionality really belongs in the osd
client rather than rbd.
This patch implements both of Josh's suggestions. It breaks
up the rbd function into separate functions and defines them
in the osd client module as exported interfaces. Unlike the
rbd version, however, the functions don't allocate an osd_req_op
structure; they are provided the address of one and that is
initialized instead.
The rbd function has been eliminated and calls to it have been
replaced by calls to the new routines. The rbd code now uses a
stack (struct) variable to hold the op rather than allocating and
freeing it each time.
For now only the capabilities used by rbd are implemented.
Implementing all the other osd op types, and making the rest of the
code use it, will be done separately in the next few patches.
Note that only the extent, cls, and watch portions of the
ceph_osd_req_op structure are currently used. Delete the others
(xattr, pgls, and snap) from its definition so nobody thinks they're
actually implemented or needed. We can add them back again later
if needed, when we know they've been tested.
This (and a few follow-on patches) resolves:
http://tracker.ceph.com/issues/3861
Signed-off-by: Alex Elder <elder@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
2013-03-14 05:50:00 +04:00

extern void osd_req_op_extent_init(struct ceph_osd_request *osd_req,
				   unsigned int which, u16 opcode,
				   u64 offset, u64 length,
				   u64 truncate_size, u32 truncate_seq);
2013-04-05 10:27:11 +04:00
|
|
|
extern void osd_req_op_extent_update(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which, u64 length);
|
2016-01-07 12:32:54 +03:00
|
|
|
extern void osd_req_op_extent_dup_last(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which, u64 offset_inc);
|
2013-04-05 10:27:12 +04:00
|
|
|
|
|
|
|
extern struct ceph_osd_data *osd_req_op_extent_osd_data(
|
|
|
|
struct ceph_osd_request *osd_req,
|
2013-04-15 23:50:36 +04:00
|
|
|
unsigned int which);
|
2013-04-05 10:27:12 +04:00
|
|
|
|
|
|
|
extern void osd_req_op_extent_osd_data_pages(struct ceph_osd_request *,
|
2013-04-15 23:50:36 +04:00
|
|
|
unsigned int which,
|
2013-04-05 10:27:12 +04:00
|
|
|
struct page **pages, u64 length,
|
|
|
|
u32 alignment, bool pages_from_pool,
|
|
|
|
bool own_pages);
|
|
|
|
extern void osd_req_op_extent_osd_data_pagelist(struct ceph_osd_request *,
|
2013-04-15 23:50:36 +04:00
|
|
|
unsigned int which,
|
2013-04-05 10:27:12 +04:00
|
|
|
struct ceph_pagelist *pagelist);
|
|
|
|
#ifdef CONFIG_BLOCK
|
2018-01-20 12:30:10 +03:00
|
|
|
void osd_req_op_extent_osd_data_bio(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which,
|
|
|
|
struct ceph_bio_iter *bio_pos,
|
|
|
|
u32 bio_length);
|
2013-04-05 10:27:12 +04:00
|
|
|
#endif /* CONFIG_BLOCK */
|
2018-05-04 17:57:30 +03:00
|
|
|
void osd_req_op_extent_osd_data_bvecs(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which,
|
|
|
|
struct bio_vec *bvecs, u32 num_bvecs,
|
|
|
|
u32 bytes);
|
2018-01-20 12:30:11 +03:00
|
|
|
void osd_req_op_extent_osd_data_bvec_pos(struct ceph_osd_request *osd_req,
|
|
|
|
unsigned int which,
|
|
|
|
struct ceph_bvec_iter *bvec_pos);
|
2013-04-05 10:27:12 +04:00
|
|
|
|
2013-04-05 23:46:02 +04:00
|
|
|
extern void osd_req_op_cls_request_data_pagelist(struct ceph_osd_request *,
					unsigned int which,
					struct ceph_pagelist *pagelist);
extern void osd_req_op_cls_request_data_pages(struct ceph_osd_request *,
					unsigned int which,
					struct page **pages, u64 length,
					u32 alignment, bool pages_from_pool,
					bool own_pages);
void osd_req_op_cls_request_data_bvecs(struct ceph_osd_request *osd_req,
				       unsigned int which,
				       struct bio_vec *bvecs, u32 num_bvecs,
				       u32 bytes);
extern void osd_req_op_cls_response_data_pages(struct ceph_osd_request *,
					unsigned int which,
					struct page **pages, u64 length,
					u32 alignment, bool pages_from_pool,
					bool own_pages);
int osd_req_op_cls_init(struct ceph_osd_request *osd_req, unsigned int which,
			const char *class, const char *method);
extern int osd_req_op_xattr_init(struct ceph_osd_request *osd_req, unsigned int which,
				 u16 opcode, const char *name, const void *value,
				 size_t size, u8 cmp_op, u8 cmp_mode);
extern void osd_req_op_alloc_hint_init(struct ceph_osd_request *osd_req,
				       unsigned int which,
				       u64 expected_object_size,
				       u64 expected_write_size);
extern struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
					       struct ceph_snap_context *snapc,
					       unsigned int num_ops,
					       bool use_mempool,
					       gfp_t gfp_flags);
int ceph_osdc_alloc_messages(struct ceph_osd_request *req, gfp_t gfp);

extern struct ceph_osd_request *ceph_osdc_new_request(struct ceph_osd_client *,
				      struct ceph_file_layout *layout,
				      struct ceph_vino vino,
				      u64 offset, u64 *len,
				      unsigned int which, int num_ops,
				      int opcode, int flags,
				      struct ceph_snap_context *snapc,
				      u32 truncate_seq, u64 truncate_size,
				      bool use_mempool);

extern void ceph_osdc_get_request(struct ceph_osd_request *req);
extern void ceph_osdc_put_request(struct ceph_osd_request *req);

extern int ceph_osdc_start_request(struct ceph_osd_client *osdc,
				   struct ceph_osd_request *req,
				   bool nofail);
extern void ceph_osdc_cancel_request(struct ceph_osd_request *req);
extern int ceph_osdc_wait_request(struct ceph_osd_client *osdc,
				  struct ceph_osd_request *req);
extern void ceph_osdc_sync(struct ceph_osd_client *osdc);

extern void ceph_osdc_flush_notifies(struct ceph_osd_client *osdc);
void ceph_osdc_maybe_request_map(struct ceph_osd_client *osdc);
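
/*
 * Typical single-op request lifecycle, sketched from the declarations
 * in this header.  This is an illustrative outline, not taken from any
 * specific caller; "osdc", "layout", "vino", "off"/"len" and the
 * truncate values are assumed to come from the caller's context, and
 * error handling is abbreviated:
 *
 *	req = ceph_osdc_new_request(osdc, layout, vino, off, &len,
 *				    0, 1, CEPH_OSD_OP_READ,
 *				    CEPH_OSD_FLAG_READ, NULL,
 *				    truncate_seq, truncate_size, false);
 *	if (IS_ERR(req))
 *		return PTR_ERR(req);
 *	... attach data buffers to op 0 ...
 *	ret = ceph_osdc_start_request(osdc, req, false);
 *	if (!ret)
 *		ret = ceph_osdc_wait_request(osdc, req);
 *	ceph_osdc_put_request(req);
 */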

int ceph_osdc_call(struct ceph_osd_client *osdc,
		   struct ceph_object_id *oid,
		   struct ceph_object_locator *oloc,
		   const char *class, const char *method,
		   unsigned int flags,
		   struct page *req_page, size_t req_len,
		   struct page **resp_pages, size_t *resp_len);
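
/*
 * ceph_osdc_call() is the synchronous single-shot path for executing an
 * object class method: at most one page of request data in, the reply
 * into caller-provided pages.  A hedged sketch (oid/oloc setup, page
 * allocation and error handling elided; the class/method names are
 * placeholders):
 *
 *	ret = ceph_osdc_call(osdc, &oid, &oloc, "someclass", "somemethod",
 *			     CEPH_OSD_FLAG_READ, req_page, req_len,
 *			     &reply_page, &reply_len);
 */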

extern int ceph_osdc_readpages(struct ceph_osd_client *osdc,
			       struct ceph_vino vino,
			       struct ceph_file_layout *layout,
			       u64 off, u64 *plen,
			       u32 truncate_seq, u64 truncate_size,
			       struct page **pages, int nr_pages,
			       int page_align);

extern int ceph_osdc_writepages(struct ceph_osd_client *osdc,
				struct ceph_vino vino,
				struct ceph_file_layout *layout,
				struct ceph_snap_context *sc,
				u64 off, u64 len,
				u32 truncate_seq, u64 truncate_size,
				struct timespec64 *mtime,
				struct page **pages, int nr_pages);

int ceph_osdc_copy_from(struct ceph_osd_client *osdc,
			u64 src_snapid, u64 src_version,
			struct ceph_object_id *src_oid,
			struct ceph_object_locator *src_oloc,
			u32 src_fadvise_flags,
			struct ceph_object_id *dst_oid,
			struct ceph_object_locator *dst_oloc,
			u32 dst_fadvise_flags,
			u8 copy_from_flags);

/* watch/notify */
struct ceph_osd_linger_request *
ceph_osdc_watch(struct ceph_osd_client *osdc,
		struct ceph_object_id *oid,
		struct ceph_object_locator *oloc,
		rados_watchcb2_t wcb,
		rados_watcherrcb_t errcb,
		void *data);
int ceph_osdc_unwatch(struct ceph_osd_client *osdc,
		      struct ceph_osd_linger_request *lreq);

int ceph_osdc_notify_ack(struct ceph_osd_client *osdc,
			 struct ceph_object_id *oid,
			 struct ceph_object_locator *oloc,
			 u64 notify_id,
			 u64 cookie,
			 void *payload,
			 u32 payload_len);
int ceph_osdc_notify(struct ceph_osd_client *osdc,
		     struct ceph_object_id *oid,
		     struct ceph_object_locator *oloc,
		     void *payload,
		     u32 payload_len,
		     u32 timeout,
		     struct page ***preply_pages,
		     size_t *preply_len);
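
/*
 * Watch/notify outline: register a linger request with ceph_osdc_watch(),
 * acknowledge incoming notifies from the watch callback, and tear down
 * with ceph_osdc_unwatch().  A hedged sketch; "my_watchcb"/"my_errcb"
 * are hypothetical caller-supplied callbacks:
 *
 *	lreq = ceph_osdc_watch(osdc, &oid, &oloc, my_watchcb, my_errcb, data);
 *	if (IS_ERR(lreq))
 *		return PTR_ERR(lreq);
 *	... later, from my_watchcb(), with the notify_id/cookie it was given:
 *	ceph_osdc_notify_ack(osdc, &oid, &oloc, notify_id, cookie, NULL, 0);
 *	... on shutdown:
 *	ceph_osdc_unwatch(osdc, lreq);
 */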
int ceph_osdc_watch_check(struct ceph_osd_client *osdc,
			  struct ceph_osd_linger_request *lreq);
int ceph_osdc_list_watchers(struct ceph_osd_client *osdc,
			    struct ceph_object_id *oid,
			    struct ceph_object_locator *oloc,
			    struct ceph_watch_item **watchers,
			    u32 *num_watchers);

#endif