/*
 * raid1.c : Multiple Devices driver for Linux
 *
 * Copyright (C) 1999, 2000, 2001 Ingo Molnar, Red Hat
 *
 * Copyright (C) 1996, 1997, 1998 Ingo Molnar, Miguel de Icaza, Gadi Oxman
 *
 * RAID-1 management functions.
 *
 * Better read-balancing code written by Mika Kuoppala <miku@iki.fi>, 2000
 *
 * Fixes to reconstruction by Jakob Østergaard <jakob@ostenfeld.dk>
 * Various fixes by Neil Brown <neilb@cse.unsw.edu.au>
 *
 * Changes by Peter T. Breuer <ptb@it.uc3m.es> 31/1/2003 to support
 * bitmapped intelligence in resync:
 *
 * - bitmap marked during normal i/o
 * - bitmap used to skip nondirty blocks during sync
 *
 * Additions to bitmap code, (C) 2003-2004 Paul Clements, SteelEye Technology:
 * - persistent bitmap code
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2, or (at your option)
 * any later version.
 *
 * You should have received a copy of the GNU General Public License
 * (for example /usr/src/linux/COPYING); if not, write to the Free
 * Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/blkdev.h>
#include <linux/seq_file.h>
#include <linux/ratelimit.h>

#include "md.h"
#include "raid1.h"
#include "bitmap.h"

#define DEBUG 0
#if DEBUG
#define PRINTK(x...) printk(x)
#else
#define PRINTK(x...)
#endif
/*
 * Number of guaranteed r1bios in case of extreme VM load:
 */
#define NR_RAID1_BIOS 256

static void allow_barrier(conf_t *conf);
static void lower_barrier(conf_t *conf);
static void *r1bio_pool_alloc(gfp_t gfp_flags, void *data)
{
        struct pool_info *pi = data;
        int size = offsetof(r1bio_t, bios[pi->raid_disks]);

        /* allocate a r1bio with room for raid_disks entries in the bios array */
        return kzalloc(size, gfp_flags);
}

static void r1bio_pool_free(void *r1_bio, void *data)
{
        kfree(r1_bio);
}
#define RESYNC_BLOCK_SIZE (64*1024)
/* #define RESYNC_BLOCK_SIZE PAGE_SIZE */
#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
#define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
#define RESYNC_WINDOW (2048*1024)
static void *r1buf_pool_alloc(gfp_t gfp_flags, void *data)
{
        struct pool_info *pi = data;
        struct page *page;
        r1bio_t *r1_bio;
        struct bio *bio;
        int i, j;

        r1_bio = r1bio_pool_alloc(gfp_flags, pi);
        if (!r1_bio)
                return NULL;

        /*
         * Allocate bios: 1 for reading, n-1 for writing
         */
        for (j = pi->raid_disks; j--; ) {
                bio = bio_kmalloc(gfp_flags, RESYNC_PAGES);
                if (!bio)
                        goto out_free_bio;
                r1_bio->bios[j] = bio;
        }
        /*
         * Allocate RESYNC_PAGES data pages and attach them to
         * the first bio.
         * If this is a user-requested check/repair, allocate
         * RESYNC_PAGES for each bio.
         */
        if (test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery))
                j = pi->raid_disks;
        else
                j = 1;
        while (j--) {
                bio = r1_bio->bios[j];
                for (i = 0; i < RESYNC_PAGES; i++) {
                        page = alloc_page(gfp_flags);
                        if (unlikely(!page))
                                goto out_free_pages;

                        bio->bi_io_vec[i].bv_page = page;
                        bio->bi_vcnt = i+1;
                }
        }
        /* If not user-requested, copy the page pointers to all bios */
        if (!test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) {
                for (i = 0; i < RESYNC_PAGES; i++)
                        for (j = 1; j < pi->raid_disks; j++)
                                r1_bio->bios[j]->bi_io_vec[i].bv_page =
                                        r1_bio->bios[0]->bi_io_vec[i].bv_page;
        }

        r1_bio->master_bio = NULL;

        return r1_bio;

out_free_pages:
        for (j = 0; j < pi->raid_disks; j++)
                for (i = 0; i < r1_bio->bios[j]->bi_vcnt; i++)
                        put_page(r1_bio->bios[j]->bi_io_vec[i].bv_page);
        j = -1;
out_free_bio:
        while (++j < pi->raid_disks)
                bio_put(r1_bio->bios[j]);
        r1bio_pool_free(r1_bio, data);
        return NULL;
}
static void r1buf_pool_free(void *__r1_bio, void *data)
{
        struct pool_info *pi = data;
        int i, j;
        r1bio_t *r1bio = __r1_bio;

        for (i = 0; i < RESYNC_PAGES; i++)
                for (j = pi->raid_disks; j--; ) {
                        if (j == 0 ||
                            r1bio->bios[j]->bi_io_vec[i].bv_page !=
                            r1bio->bios[0]->bi_io_vec[i].bv_page)
                                safe_put_page(r1bio->bios[j]->bi_io_vec[i].bv_page);
                }
        for (i = 0; i < pi->raid_disks; i++)
                bio_put(r1bio->bios[i]);

        r1bio_pool_free(r1bio, data);
}
static void put_all_bios(conf_t *conf, r1bio_t *r1_bio)
{
        int i;

        for (i = 0; i < conf->raid_disks; i++) {
                struct bio **bio = r1_bio->bios + i;
                if (*bio && *bio != IO_BLOCKED)
                        bio_put(*bio);
                *bio = NULL;
        }
}
static void free_r1bio(r1bio_t *r1_bio)
{
        conf_t *conf = r1_bio->mddev->private;

        /*
         * Wake up any possible resync thread that waits for the device
         * to go idle.
         */
        allow_barrier(conf);

        put_all_bios(conf, r1_bio);
        mempool_free(r1_bio, conf->r1bio_pool);
}
static void put_buf(r1bio_t *r1_bio)
{
        conf_t *conf = r1_bio->mddev->private;
        int i;

        for (i = 0; i < conf->raid_disks; i++) {
                struct bio *bio = r1_bio->bios[i];
                if (bio->bi_end_io)
                        rdev_dec_pending(conf->mirrors[i].rdev, r1_bio->mddev);
        }

        mempool_free(r1_bio, conf->r1buf_pool);

        lower_barrier(conf);
}
static void reschedule_retry(r1bio_t *r1_bio)
{
        unsigned long flags;
        mddev_t *mddev = r1_bio->mddev;
        conf_t *conf = mddev->private;

        spin_lock_irqsave(&conf->device_lock, flags);
        list_add(&r1_bio->retry_list, &conf->retry_list);
        conf->nr_queued++;
        spin_unlock_irqrestore(&conf->device_lock, flags);

        wake_up(&conf->wait_barrier);
        md_wakeup_thread(mddev->thread);
}
/*
 * raid_end_bio_io() is called when we have finished servicing a mirrored
 * operation and are ready to return a success/failure code to the buffer
 * cache layer.
 */
static void raid_end_bio_io(r1bio_t *r1_bio)
{
        struct bio *bio = r1_bio->master_bio;

        /* if nobody has done the final endio yet, do it now */
        if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) {
                PRINTK(KERN_DEBUG "raid1: sync end %s on sectors %llu-%llu\n",
                       (bio_data_dir(bio) == WRITE) ? "write" : "read",
                       (unsigned long long) bio->bi_sector,
                       (unsigned long long) bio->bi_sector +
                       (bio->bi_size >> 9) - 1);

                bio_endio(bio,
                          test_bit(R1BIO_Uptodate, &r1_bio->state) ? 0 : -EIO);
        }
        free_r1bio(r1_bio);
}
/*
 * Update disk head position estimator based on IRQ completion info.
 */
static inline void update_head_pos(int disk, r1bio_t *r1_bio)
{
        conf_t *conf = r1_bio->mddev->private;

        conf->mirrors[disk].head_position =
                r1_bio->sector + (r1_bio->sectors);
}
static void raid1_end_read_request(struct bio *bio, int error)
{
        int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
        r1bio_t *r1_bio = bio->bi_private;
        int mirror;
        conf_t *conf = r1_bio->mddev->private;

        mirror = r1_bio->read_disk;
        /*
         * this branch is our 'one mirror IO has finished' event handler:
         */
        update_head_pos(mirror, r1_bio);

        if (uptodate)
                set_bit(R1BIO_Uptodate, &r1_bio->state);
        else {
                /* If all other devices have failed, we want to return
                 * the error upwards rather than fail the last device.
                 * Here we redefine "uptodate" to mean "Don't want to retry"
                 */
                unsigned long flags;
                spin_lock_irqsave(&conf->device_lock, flags);
                if (r1_bio->mddev->degraded == conf->raid_disks ||
                    (r1_bio->mddev->degraded == conf->raid_disks-1 &&
                     !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
                        uptodate = 1;
                spin_unlock_irqrestore(&conf->device_lock, flags);
        }

        if (uptodate)
                raid_end_bio_io(r1_bio);
        else {
                /*
                 * oops, read error:
                 */
                char b[BDEVNAME_SIZE];
                printk_ratelimited(
                        KERN_ERR "md/raid1:%s: %s: "
                        "rescheduling sector %llu\n",
                        mdname(conf->mddev),
                        bdevname(conf->mirrors[mirror].rdev->bdev,
                                 b),
                        (unsigned long long)r1_bio->sector);
                reschedule_retry(r1_bio);
        }

        rdev_dec_pending(conf->mirrors[mirror].rdev, conf->mddev);
}
static void r1_bio_write_done(r1bio_t *r1_bio)
{
        if (atomic_dec_and_test(&r1_bio->remaining)) {
                /* it really is the end of this request */
                if (test_bit(R1BIO_BehindIO, &r1_bio->state)) {
                        /* free extra copy of the data pages */
                        int i = r1_bio->behind_page_count;
                        while (i--)
                                safe_put_page(r1_bio->behind_pages[i]);
                        kfree(r1_bio->behind_pages);
                        r1_bio->behind_pages = NULL;
                }
                /* clear the bitmap if all writes complete successfully */
                bitmap_endwrite(r1_bio->mddev->bitmap, r1_bio->sector,
                                r1_bio->sectors,
                                !test_bit(R1BIO_Degraded, &r1_bio->state),
                                test_bit(R1BIO_BehindIO, &r1_bio->state));
                md_write_end(r1_bio->mddev);
                raid_end_bio_io(r1_bio);
        }
}
static void raid1_end_write_request(struct bio *bio, int error)
{
        int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
        r1bio_t *r1_bio = bio->bi_private;
        int mirror, behind = test_bit(R1BIO_BehindIO, &r1_bio->state);
        conf_t *conf = r1_bio->mddev->private;
        struct bio *to_put = NULL;

        for (mirror = 0; mirror < conf->raid_disks; mirror++)
                if (r1_bio->bios[mirror] == bio)
                        break;

        /*
         * 'one mirror IO has finished' event handler:
         */
        r1_bio->bios[mirror] = NULL;
        to_put = bio;
        if (!uptodate) {
                md_error(r1_bio->mddev, conf->mirrors[mirror].rdev);
                /* an I/O failed, we can't clear the bitmap */
                set_bit(R1BIO_Degraded, &r1_bio->state);
        } else
                /*
                 * Set R1BIO_Uptodate in our master bio, so that we
                 * will return a good error code to the higher
                 * levels even if IO on some other mirrored buffer
                 * fails.
                 *
                 * The 'master' represents the composite IO operation
                 * to user-side. So if something waits for IO, then it
                 * will wait for the 'master' bio.
                 */
                set_bit(R1BIO_Uptodate, &r1_bio->state);

        update_head_pos(mirror, r1_bio);

        if (behind) {
                if (test_bit(WriteMostly, &conf->mirrors[mirror].rdev->flags))
                        atomic_dec(&r1_bio->behind_remaining);

                /*
                 * In behind mode, we ACK the master bio once the I/O
                 * has safely reached all non-writemostly
                 * disks. Setting the Returned bit ensures that this
                 * gets done only once -- we don't ever want to return
                 * -EIO here, instead we'll wait
                 */
                if (atomic_read(&r1_bio->behind_remaining) >= (atomic_read(&r1_bio->remaining)-1) &&
                    test_bit(R1BIO_Uptodate, &r1_bio->state)) {
                        /* Maybe we can return now */
                        if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) {
                                struct bio *mbio = r1_bio->master_bio;
                                PRINTK(KERN_DEBUG "raid1: behind end write sectors %llu-%llu\n",
                                       (unsigned long long) mbio->bi_sector,
                                       (unsigned long long) mbio->bi_sector +
                                       (mbio->bi_size >> 9) - 1);
                                bio_endio(mbio, 0);
                        }
                }
        }
        rdev_dec_pending(conf->mirrors[mirror].rdev, conf->mddev);

        /*
         * Let's see if all mirrored write operations have finished
         * already.
         */
        r1_bio_write_done(r1_bio);

        if (to_put)
                bio_put(to_put);
}
/*
 * This routine returns the disk from which the requested read should
 * be done. There is a per-array 'next expected sequential IO' sector
 * number - if this matches on the next IO then we use the last disk.
 * There is also a per-disk 'last known head position' sector that is
 * maintained from IRQ contexts, both the normal and the resync IO
 * completion handlers update this position correctly. If there is no
 * perfect sequential match then we pick the disk whose head is closest.
 *
 * If there are 2 mirrors in the same 2 devices, performance degrades
 * because position is mirror, not device based.
 *
 * The rdev for the device selected will have nr_pending incremented.
 */
static int read_balance(conf_t *conf, r1bio_t *r1_bio)
{
        const sector_t this_sector = r1_bio->sector;
        const int sectors = r1_bio->sectors;
        int start_disk;
        int best_disk;
        int i;
        sector_t best_dist;
        mdk_rdev_t *rdev;
        int choose_first;

        rcu_read_lock();
        /*
         * Check if we can balance. We can balance on the whole
         * device if no resync is going on, or below the resync window.
         * We take the first readable disk when above the resync window.
         */
 retry:
        best_disk = -1;
        best_dist = MaxSector;
        if (conf->mddev->recovery_cp < MaxSector &&
            (this_sector + sectors >= conf->next_resync)) {
                choose_first = 1;
                start_disk = 0;
        } else {
                choose_first = 0;
                start_disk = conf->last_used;
        }

        for (i = 0; i < conf->raid_disks; i++) {
                sector_t dist;
                int disk = start_disk + i;
                if (disk >= conf->raid_disks)
                        disk -= conf->raid_disks;

                rdev = rcu_dereference(conf->mirrors[disk].rdev);
                if (r1_bio->bios[disk] == IO_BLOCKED
                    || rdev == NULL
                    || test_bit(Faulty, &rdev->flags))
                        continue;
                if (!test_bit(In_sync, &rdev->flags) &&
                    rdev->recovery_offset < this_sector + sectors)
                        continue;
                if (test_bit(WriteMostly, &rdev->flags)) {
                        /* Don't balance among write-mostly, just
                         * use the first as a last resort */
                        if (best_disk < 0)
                                best_disk = disk;
                        continue;
                }
                /* This is a reasonable device to use. It might
                 * even be best.
                 */
                dist = abs(this_sector - conf->mirrors[disk].head_position);
                if (choose_first
                    /* Don't change to another disk for sequential reads */
                    || conf->next_seq_sect == this_sector
                    || dist == 0
                    /* If device is idle, use it */
                    || atomic_read(&rdev->nr_pending) == 0) {
                        best_disk = disk;
                        break;
                }
                if (dist < best_dist) {
                        best_dist = dist;
                        best_disk = disk;
                }
        }

        if (best_disk >= 0) {
                rdev = rcu_dereference(conf->mirrors[best_disk].rdev);
                if (!rdev)
                        goto retry;
                atomic_inc(&rdev->nr_pending);
                if (test_bit(Faulty, &rdev->flags)) {
                        /* cannot risk returning a device that failed
                         * before we inc'ed nr_pending
                         */
                        rdev_dec_pending(rdev, conf->mddev);
                        goto retry;
                }
                conf->next_seq_sect = this_sector + sectors;
                conf->last_used = best_disk;
        }
        rcu_read_unlock();

        return best_disk;
}
int md_raid1_congested(mddev_t *mddev, int bits)
{
        conf_t *conf = mddev->private;
        int i, ret = 0;

        rcu_read_lock();
        for (i = 0; i < mddev->raid_disks; i++) {
                mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
                if (rdev && !test_bit(Faulty, &rdev->flags)) {
                        struct request_queue *q = bdev_get_queue(rdev->bdev);

                        BUG_ON(!q);

                        /* Note the '|| 1' - when read_balance prefers
                         * non-congested targets, it can be removed
                         */
                        if ((bits & (1<<BDI_async_congested)) || 1)
                                ret |= bdi_congested(&q->backing_dev_info, bits);
                        else
                                ret &= bdi_congested(&q->backing_dev_info, bits);
                }
        }
        rcu_read_unlock();
        return ret;
}
EXPORT_SYMBOL_GPL(md_raid1_congested);

static int raid1_congested(void *data, int bits)
{
        mddev_t *mddev = data;

        return mddev_congested(mddev, bits) ||
                md_raid1_congested(mddev, bits);
}
static void flush_pending_writes(conf_t *conf)
{
        /* Any writes that have been queued but are awaiting
         * bitmap updates get flushed here.
         */
        spin_lock_irq(&conf->device_lock);

        if (conf->pending_bio_list.head) {
                struct bio *bio;
                bio = bio_list_get(&conf->pending_bio_list);
                spin_unlock_irq(&conf->device_lock);
                /* flush any pending bitmap writes to
                 * disk before proceeding w/ I/O */
                bitmap_unplug(conf->mddev->bitmap);

                while (bio) { /* submit pending writes */
                        struct bio *next = bio->bi_next;
                        bio->bi_next = NULL;
                        generic_make_request(bio);
                        bio = next;
                }
        } else
                spin_unlock_irq(&conf->device_lock);
}
/* Barriers....
 * Sometimes we need to suspend IO while we do something else,
 * either some resync/recovery, or reconfigure the array.
 * To do this we raise a 'barrier'.
 * The 'barrier' is a counter that can be raised multiple times
 * to count how many activities are happening which preclude
 * normal IO.
 * We can only raise the barrier if there is no pending IO,
 * i.e. if nr_pending == 0.
 * We choose only to raise the barrier if no-one is waiting for the
 * barrier to go down. This means that as soon as an IO request
 * is ready, no other operations which require a barrier will start
 * until the IO request has had a chance.
 *
 * So: regular IO calls 'wait_barrier'. When that returns there
 * is no background IO happening; it must arrange to call
 * allow_barrier when it has finished its IO.
 * Background IO calls must call raise_barrier. Once that returns
 * there is no normal IO happening. It must arrange to call
 * lower_barrier when the particular background IO completes.
 */
#define RESYNC_DEPTH 32
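/*
 * Illustrative sketch only (not part of the driver): how callers are
 * expected to balance the two barrier call pairs described above.
 *
 *      Regular IO path:                Resync/recovery path:
 *              wait_barrier(conf);             raise_barrier(conf);
 *              ... submit normal IO ...        ... issue resync IO ...
 *              allow_barrier(conf);            lower_barrier(conf);
 *
 * Every wait_barrier() must be paired with exactly one allow_barrier(),
 * and every raise_barrier() with exactly one lower_barrier(); otherwise
 * nr_pending or barrier drifts and one side blocks forever.
 */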
static void raise_barrier(conf_t *conf)
{
        spin_lock_irq(&conf->resync_lock);

        /* Wait until no block IO is waiting */
        wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
                            conf->resync_lock, );

        /* block any new IO from starting */
        conf->barrier++;

        /* Now wait for all pending IO to complete */
        wait_event_lock_irq(conf->wait_barrier,
                            !conf->nr_pending && conf->barrier < RESYNC_DEPTH,
                            conf->resync_lock, );

        spin_unlock_irq(&conf->resync_lock);
}
static void lower_barrier(conf_t *conf)
{
        unsigned long flags;
        BUG_ON(conf->barrier <= 0);
        spin_lock_irqsave(&conf->resync_lock, flags);
        conf->barrier--;
        spin_unlock_irqrestore(&conf->resync_lock, flags);
        wake_up(&conf->wait_barrier);
}
static void wait_barrier(conf_t *conf)
{
        spin_lock_irq(&conf->resync_lock);
        if (conf->barrier) {
                conf->nr_waiting++;
                wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
                                    conf->resync_lock, );
                conf->nr_waiting--;
        }
        conf->nr_pending++;
        spin_unlock_irq(&conf->resync_lock);
}
static void allow_barrier(conf_t *conf)
{
        unsigned long flags;
        spin_lock_irqsave(&conf->resync_lock, flags);
        conf->nr_pending--;
        spin_unlock_irqrestore(&conf->resync_lock, flags);
        wake_up(&conf->wait_barrier);
}
static void freeze_array(conf_t *conf)
|
|
|
|
{
|
|
|
|
/* stop syncio and normal IO and wait for everything to
|
|
|
|
* go quite.
|
|
|
|
* We increment barrier and nr_waiting, and then
|
2008-03-05 01:29:35 +03:00
|
|
|
* wait until nr_pending match nr_queued+1
|
|
|
|
* This is called in the context of one normal IO request
|
|
|
|
* that has failed. Thus any sync request that might be pending
|
|
|
|
* will be blocked by nr_pending, and we need to wait for
|
|
|
|
* pending IO requests to complete or be queued for re-try.
|
|
|
|
* Thus the number queued (nr_queued) plus this request (1)
|
|
|
|
* must match the number of pending IOs (nr_pending) before
|
|
|
|
* we continue.
|
2006-01-06 11:20:19 +03:00
|
|
|
*/
|
|
|
|
spin_lock_irq(&conf->resync_lock);
|
|
|
|
conf->barrier++;
|
|
|
|
conf->nr_waiting++;
|
|
|
|
wait_event_lock_irq(conf->wait_barrier,
|
2008-03-05 01:29:35 +03:00
|
|
|
conf->nr_pending == conf->nr_queued+1,
|
2006-01-06 11:20:19 +03:00
|
|
|
conf->resync_lock,
|
2011-04-18 12:25:43 +04:00
|
|
|
flush_pending_writes(conf));
|
2006-01-06 11:20:19 +03:00
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
|
|
|
}
|
|
|
|
static void unfreeze_array(conf_t *conf)
|
|
|
|
{
|
|
|
|
/* reverse the effect of the freeze */
|
|
|
|
spin_lock_irq(&conf->resync_lock);
|
|
|
|
conf->barrier--;
|
|
|
|
conf->nr_waiting--;
|
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
|
|
|
}

/* duplicate the data pages for behind I/O
 */
static void alloc_behind_pages(struct bio *bio, r1bio_t *r1_bio)
{
	int i;
	struct bio_vec *bvec;
	struct page **pages = kzalloc(bio->bi_vcnt * sizeof(struct page *),
				      GFP_NOIO);
	if (unlikely(!pages))
		return;

	bio_for_each_segment(bvec, bio, i) {
		pages[i] = alloc_page(GFP_NOIO);
		if (unlikely(!pages[i]))
			goto do_sync_io;
		memcpy(kmap(pages[i]) + bvec->bv_offset,
		       kmap(bvec->bv_page) + bvec->bv_offset, bvec->bv_len);
		kunmap(pages[i]);
		kunmap(bvec->bv_page);
	}
	r1_bio->behind_pages = pages;
	r1_bio->behind_page_count = bio->bi_vcnt;
	set_bit(R1BIO_BehindIO, &r1_bio->state);
	return;

do_sync_io:
	for (i = 0; i < bio->bi_vcnt; i++)
		if (pages[i])
			put_page(pages[i]);
	kfree(pages);
	PRINTK("%dB behind alloc failed, doing sync I/O\n", bio->bi_size);
}

static int make_request(mddev_t *mddev, struct bio *bio)
{
	conf_t *conf = mddev->private;
	mirror_info_t *mirror;
	r1bio_t *r1_bio;
	struct bio *read_bio;
	int i, targets = 0, disks;
	struct bitmap *bitmap;
	unsigned long flags;
	const int rw = bio_data_dir(bio);
	const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
	const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
	mdk_rdev_t *blocked_rdev;
	int plugged;

	/*
	 * Register the new request and wait if the reconstruction
	 * thread has put up a bar for new requests.
	 * Continue immediately if no resync is active currently.
	 */

	md_write_start(mddev, bio); /* wait on superblock update early */

	if (bio_data_dir(bio) == WRITE &&
	    bio->bi_sector + bio->bi_size/512 > mddev->suspend_lo &&
	    bio->bi_sector < mddev->suspend_hi) {
		/* As the suspend_* range is controlled by
		 * userspace, we want an interruptible
		 * wait.
		 */
		DEFINE_WAIT(w);
		for (;;) {
			flush_signals(current);
			prepare_to_wait(&conf->wait_barrier,
					&w, TASK_INTERRUPTIBLE);
			if (bio->bi_sector + bio->bi_size/512 <= mddev->suspend_lo ||
			    bio->bi_sector >= mddev->suspend_hi)
				break;
			schedule();
		}
		finish_wait(&conf->wait_barrier, &w);
	}

	wait_barrier(conf);

	bitmap = mddev->bitmap;

	/*
	 * make_request() can abort the operation when READA is being
	 * used and no empty request is available.
	 */
	r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);

	r1_bio->master_bio = bio;
	r1_bio->sectors = bio->bi_size >> 9;
	r1_bio->state = 0;
	r1_bio->mddev = mddev;
	r1_bio->sector = bio->bi_sector;

	if (rw == READ) {
		/*
		 * read balancing logic:
		 */
		int rdisk = read_balance(conf, r1_bio);

		if (rdisk < 0) {
			/* couldn't find anywhere to read from */
			raid_end_bio_io(r1_bio);
			return 0;
		}
		mirror = conf->mirrors + rdisk;

		if (test_bit(WriteMostly, &mirror->rdev->flags) &&
		    bitmap) {
			/* Reading from a write-mostly device must
			 * take care not to over-take any writes
			 * that are 'behind'
			 */
			wait_event(bitmap->behind_wait,
				   atomic_read(&bitmap->behind_writes) == 0);
		}
		r1_bio->read_disk = rdisk;

		read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);

		r1_bio->bios[rdisk] = read_bio;

		read_bio->bi_sector = r1_bio->sector + mirror->rdev->data_offset;
		read_bio->bi_bdev = mirror->rdev->bdev;
		read_bio->bi_end_io = raid1_end_read_request;
		read_bio->bi_rw = READ | do_sync;
		read_bio->bi_private = r1_bio;

		generic_make_request(read_bio);
		return 0;
	}

	/*
	 * WRITE:
	 */
	/* first select target devices under spinlock and
	 * inc refcount on their rdev. Record them by setting
	 * bios[x] to bio
	 */
	plugged = mddev_check_plugged(mddev);

	disks = conf->raid_disks;
 retry_write:
	blocked_rdev = NULL;
	rcu_read_lock();
	for (i = 0; i < disks; i++) {
		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
			atomic_inc(&rdev->nr_pending);
			blocked_rdev = rdev;
			break;
		}
		if (rdev && !test_bit(Faulty, &rdev->flags)) {
			atomic_inc(&rdev->nr_pending);
			if (test_bit(Faulty, &rdev->flags)) {
				rdev_dec_pending(rdev, mddev);
				r1_bio->bios[i] = NULL;
			} else {
				r1_bio->bios[i] = bio;
				targets++;
			}
		} else
			r1_bio->bios[i] = NULL;
	}
	rcu_read_unlock();

	if (unlikely(blocked_rdev)) {
		/* Wait for this device to become unblocked */
		int j;

		for (j = 0; j < i; j++)
			if (r1_bio->bios[j])
				rdev_dec_pending(conf->mirrors[j].rdev, mddev);

		allow_barrier(conf);
		md_wait_for_blocked_rdev(blocked_rdev, mddev);
		wait_barrier(conf);
		goto retry_write;
	}

	BUG_ON(targets == 0); /* we never fail the last device */

	if (targets < conf->raid_disks) {
		/* array is degraded, we will not clear the bitmap
		 * on I/O completion (see raid1_end_write_request) */
		set_bit(R1BIO_Degraded, &r1_bio->state);
	}

	/* do behind I/O ?
	 * Not if there are too many, or cannot allocate memory,
	 * or a reader on WriteMostly is waiting for behind writes
	 * to flush */
	if (bitmap &&
	    (atomic_read(&bitmap->behind_writes)
	     < mddev->bitmap_info.max_write_behind) &&
	    !waitqueue_active(&bitmap->behind_wait))
		alloc_behind_pages(bio, r1_bio);

	atomic_set(&r1_bio->remaining, 1);
	atomic_set(&r1_bio->behind_remaining, 0);

	bitmap_startwrite(bitmap, bio->bi_sector, r1_bio->sectors,
			  test_bit(R1BIO_BehindIO, &r1_bio->state));
	for (i = 0; i < disks; i++) {
		struct bio *mbio;
		if (!r1_bio->bios[i])
			continue;

		mbio = bio_clone_mddev(bio, GFP_NOIO, mddev);
		r1_bio->bios[i] = mbio;

		mbio->bi_sector	= r1_bio->sector + conf->mirrors[i].rdev->data_offset;
		mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
		mbio->bi_end_io	= raid1_end_write_request;
		mbio->bi_rw = WRITE | do_flush_fua | do_sync;
		mbio->bi_private = r1_bio;

		if (r1_bio->behind_pages) {
			struct bio_vec *bvec;
			int j;

			/* Yes, I really want the '__' version so that
			 * we clear any unused pointer in the io_vec, rather
			 * than leave them unchanged. This is important
			 * because when we come to free the pages, we won't
			 * know the original bi_idx, so we just free
			 * them all
			 */
			__bio_for_each_segment(bvec, mbio, j, 0)
				bvec->bv_page = r1_bio->behind_pages[j];
			if (test_bit(WriteMostly, &conf->mirrors[i].rdev->flags))
				atomic_inc(&r1_bio->behind_remaining);
		}

		atomic_inc(&r1_bio->remaining);
		spin_lock_irqsave(&conf->device_lock, flags);
		bio_list_add(&conf->pending_bio_list, mbio);
		spin_unlock_irqrestore(&conf->device_lock, flags);
	}
	r1_bio_write_done(r1_bio);

	/* In case raid1d snuck in to freeze_array */
	wake_up(&conf->wait_barrier);

	if (do_sync || !bitmap || !plugged)
		md_wakeup_thread(mddev->thread);

	return 0;
}

static void status(struct seq_file *seq, mddev_t *mddev)
{
	conf_t *conf = mddev->private;
	int i;

	seq_printf(seq, " [%d/%d] [", conf->raid_disks,
		   conf->raid_disks - mddev->degraded);
	rcu_read_lock();
	for (i = 0; i < conf->raid_disks; i++) {
		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
		seq_printf(seq, "%s",
			   rdev && test_bit(In_sync, &rdev->flags) ? "U" : "_");
	}
	rcu_read_unlock();
	seq_printf(seq, "]");
}

static void error(mddev_t *mddev, mdk_rdev_t *rdev)
{
	char b[BDEVNAME_SIZE];
	conf_t *conf = mddev->private;

	/*
	 * If it is not operational, then we have already marked it as dead
	 * else if it is the last working disk, ignore the error, let the
	 * next level up know.
	 * else mark the drive as failed
	 */
	if (test_bit(In_sync, &rdev->flags)
	    && (conf->raid_disks - mddev->degraded) == 1) {
		/*
		 * Don't fail the drive, act as though we were just a
		 * normal single drive.
		 * However don't try a recovery from this drive as
		 * it is very likely to fail.
		 */
		conf->recovery_disabled = mddev->recovery_disabled;
		return;
	}
	if (test_and_clear_bit(In_sync, &rdev->flags)) {
		unsigned long flags;
		spin_lock_irqsave(&conf->device_lock, flags);
		mddev->degraded++;
		set_bit(Faulty, &rdev->flags);
		spin_unlock_irqrestore(&conf->device_lock, flags);
		/*
		 * if recovery is running, make sure it aborts.
		 */
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	} else
		set_bit(Faulty, &rdev->flags);
	set_bit(MD_CHANGE_DEVS, &mddev->flags);
	printk(KERN_ALERT
	       "md/raid1:%s: Disk failure on %s, disabling device.\n"
	       "md/raid1:%s: Operation continuing on %d devices.\n",
	       mdname(mddev), bdevname(rdev->bdev, b),
	       mdname(mddev), conf->raid_disks - mddev->degraded);
}

static void print_conf(conf_t *conf)
{
	int i;

	printk(KERN_DEBUG "RAID1 conf printout:\n");
	if (!conf) {
		printk(KERN_DEBUG "(!conf)\n");
		return;
	}
	printk(KERN_DEBUG " --- wd:%d rd:%d\n", conf->raid_disks - conf->mddev->degraded,
	       conf->raid_disks);

	rcu_read_lock();
	for (i = 0; i < conf->raid_disks; i++) {
		char b[BDEVNAME_SIZE];
		mdk_rdev_t *rdev = rcu_dereference(conf->mirrors[i].rdev);
		if (rdev)
			printk(KERN_DEBUG " disk %d, wo:%d, o:%d, dev:%s\n",
			       i, !test_bit(In_sync, &rdev->flags),
			       !test_bit(Faulty, &rdev->flags),
			       bdevname(rdev->bdev, b));
	}
	rcu_read_unlock();
}

static void close_sync(conf_t *conf)
{
	wait_barrier(conf);
	allow_barrier(conf);

	mempool_destroy(conf->r1buf_pool);
	conf->r1buf_pool = NULL;
}

static int raid1_spare_active(mddev_t *mddev)
{
	int i;
	conf_t *conf = mddev->private;
	int count = 0;
	unsigned long flags;

	/*
	 * Find all failed disks within the RAID1 configuration
	 * and mark them readable.
	 * Called under mddev lock, so rcu protection not needed.
	 */
	for (i = 0; i < conf->raid_disks; i++) {
		mdk_rdev_t *rdev = conf->mirrors[i].rdev;
		if (rdev
		    && !test_bit(Faulty, &rdev->flags)
		    && !test_and_set_bit(In_sync, &rdev->flags)) {
			count++;
			sysfs_notify_dirent_safe(rdev->sysfs_state);
		}
	}
	spin_lock_irqsave(&conf->device_lock, flags);
	mddev->degraded -= count;
	spin_unlock_irqrestore(&conf->device_lock, flags);

	print_conf(conf);
	return count;
}

static int raid1_add_disk(mddev_t *mddev, mdk_rdev_t *rdev)
{
	conf_t *conf = mddev->private;
	int err = -EEXIST;
	int mirror = 0;
	mirror_info_t *p;
	int first = 0;
	int last = mddev->raid_disks - 1;

	if (mddev->recovery_disabled == conf->recovery_disabled)
		return -EBUSY;

	if (rdev->badblocks.count)
		return -EINVAL;

	if (rdev->raid_disk >= 0)
		first = last = rdev->raid_disk;

	for (mirror = first; mirror <= last; mirror++)
		if (!(p = conf->mirrors + mirror)->rdev) {

			disk_stack_limits(mddev->gendisk, rdev->bdev,
					  rdev->data_offset << 9);
			/* as we don't honour merge_bvec_fn, we must
			 * never risk violating it, so limit
			 * ->max_segments to one lying with a single
			 * page, as a one page request is never in
			 * violation.
			 */
			if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
				blk_queue_max_segments(mddev->queue, 1);
				blk_queue_segment_boundary(mddev->queue,
							   PAGE_CACHE_SIZE - 1);
			}

			p->head_position = 0;
			rdev->raid_disk = mirror;
			err = 0;
			/* As all devices are equivalent, we don't need a full recovery
			 * if this was recently any drive of the array
			 */
			if (rdev->saved_raid_disk < 0)
				conf->fullsync = 1;
			rcu_assign_pointer(p->rdev, rdev);
			break;
		}
	md_integrity_add_rdev(rdev, mddev);
	print_conf(conf);
	return err;
}

static int raid1_remove_disk(mddev_t *mddev, int number)
{
	conf_t *conf = mddev->private;
	int err = 0;
	mdk_rdev_t *rdev;
	mirror_info_t *p = conf->mirrors + number;

	print_conf(conf);
	rdev = p->rdev;
	if (rdev) {
		if (test_bit(In_sync, &rdev->flags) ||
		    atomic_read(&rdev->nr_pending)) {
			err = -EBUSY;
			goto abort;
		}
		/* Only remove non-faulty devices if recovery
		 * is not possible.
		 */
		if (!test_bit(Faulty, &rdev->flags) &&
		    mddev->recovery_disabled != conf->recovery_disabled &&
		    mddev->degraded < conf->raid_disks) {
			err = -EBUSY;
			goto abort;
		}
		p->rdev = NULL;
		synchronize_rcu();
		if (atomic_read(&rdev->nr_pending)) {
			/* lost the race, try later */
			err = -EBUSY;
			p->rdev = rdev;
			goto abort;
		}
		err = md_integrity_register(mddev);
	}
abort:

	print_conf(conf);
	return err;
}

static void end_sync_read(struct bio *bio, int error)
{
	r1bio_t *r1_bio = bio->bi_private;
	int i;

	for (i = r1_bio->mddev->raid_disks; i--; )
		if (r1_bio->bios[i] == bio)
			break;
	BUG_ON(i < 0);
	update_head_pos(i, r1_bio);
	/*
	 * we have read a block, now it needs to be re-written,
	 * or re-read if the read failed.
	 * We don't do much here, just schedule handling by raid1d
	 */
	if (test_bit(BIO_UPTODATE, &bio->bi_flags))
		set_bit(R1BIO_Uptodate, &r1_bio->state);

	if (atomic_dec_and_test(&r1_bio->remaining))
		reschedule_retry(r1_bio);
}

static void end_sync_write(struct bio *bio, int error)
{
	int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
	r1bio_t *r1_bio = bio->bi_private;
	mddev_t *mddev = r1_bio->mddev;
	conf_t *conf = mddev->private;
	int i;
	int mirror = 0;

	for (i = 0; i < conf->raid_disks; i++)
		if (r1_bio->bios[i] == bio) {
			mirror = i;
			break;
		}
	if (!uptodate) {
		sector_t sync_blocks = 0;
		sector_t s = r1_bio->sector;
		long sectors_to_go = r1_bio->sectors;
		/* make sure these bits don't get cleared. */
		do {
			bitmap_end_sync(mddev->bitmap, s,
					&sync_blocks, 1);
			s += sync_blocks;
			sectors_to_go -= sync_blocks;
		} while (sectors_to_go > 0);
		md_error(mddev, conf->mirrors[mirror].rdev);
	}

	update_head_pos(mirror, r1_bio);

	if (atomic_dec_and_test(&r1_bio->remaining)) {
		sector_t s = r1_bio->sectors;
		put_buf(r1_bio);
		md_done_sync(mddev, s, uptodate);
	}
}

static int fix_sync_read_error(r1bio_t *r1_bio)
{
	/* Try some synchronous reads of other devices to get
	 * good data, much like with normal read errors. Only
	 * read into the pages we already have so we don't
	 * need to re-issue the read request.
	 * We don't need to freeze the array, because being in an
	 * active sync request, there is no normal IO, and
	 * no overlapping syncs.
	 */
	mddev_t *mddev = r1_bio->mddev;
	conf_t *conf = mddev->private;
	struct bio *bio = r1_bio->bios[r1_bio->read_disk];
	sector_t sect = r1_bio->sector;
	int sectors = r1_bio->sectors;
	int idx = 0;

	while (sectors) {
		int s = sectors;
		int d = r1_bio->read_disk;
		int success = 0;
		mdk_rdev_t *rdev;
		int start;

		if (s > (PAGE_SIZE>>9))
			s = PAGE_SIZE >> 9;
		do {
			if (r1_bio->bios[d]->bi_end_io == end_sync_read) {
				/* No rcu protection needed here; devices
				 * can only be removed when no resync is
				 * active, and resync is currently active
				 */
				rdev = conf->mirrors[d].rdev;
				if (sync_page_io(rdev, sect, s<<9,
						 bio->bi_io_vec[idx].bv_page,
						 READ, false)) {
					success = 1;
					break;
				}
			}
			d++;
			if (d == conf->raid_disks)
				d = 0;
		} while (!success && d != r1_bio->read_disk);

		if (!success) {
			char b[BDEVNAME_SIZE];
			/* Cannot read from anywhere, array is toast */
			md_error(mddev, conf->mirrors[r1_bio->read_disk].rdev);
			printk(KERN_ALERT "md/raid1:%s: %s: unrecoverable I/O read error"
			       " for block %llu\n",
			       mdname(mddev),
			       bdevname(bio->bi_bdev, b),
			       (unsigned long long)r1_bio->sector);
			md_done_sync(mddev, r1_bio->sectors, 0);
			put_buf(r1_bio);
			return 0;
		}

		start = d;
		/* write it back and re-read */
		while (d != r1_bio->read_disk) {
			if (d == 0)
				d = conf->raid_disks;
			d--;
			if (r1_bio->bios[d]->bi_end_io != end_sync_read)
				continue;
			rdev = conf->mirrors[d].rdev;
			if (sync_page_io(rdev, sect, s<<9,
					 bio->bi_io_vec[idx].bv_page,
					 WRITE, false) == 0) {
				r1_bio->bios[d]->bi_end_io = NULL;
				rdev_dec_pending(rdev, mddev);
				md_error(mddev, rdev);
			}
		}
		d = start;
		while (d != r1_bio->read_disk) {
			if (d == 0)
				d = conf->raid_disks;
			d--;
			if (r1_bio->bios[d]->bi_end_io != end_sync_read)
				continue;
			rdev = conf->mirrors[d].rdev;
			if (sync_page_io(rdev, sect, s<<9,
					 bio->bi_io_vec[idx].bv_page,
					 READ, false) == 0)
				md_error(mddev, rdev);
			else
				atomic_add(s, &rdev->corrected_errors);
		}
		sectors -= s;
		sect += s;
		idx++;
	}
	set_bit(R1BIO_Uptodate, &r1_bio->state);
	set_bit(BIO_UPTODATE, &bio->bi_flags);
	return 1;
}
|
|
|
|
|
|
|
|
static int process_checks(r1bio_t *r1_bio)
{
	/* We have read all readable devices.  If we haven't
	 * got the block, then there is no hope left.
	 * If we have, then we want to do a comparison
	 * and skip the write if everything is the same.
	 * If any blocks failed to read, then we need to
	 * attempt an over-write.
	 */
	mddev_t *mddev = r1_bio->mddev;
	conf_t *conf = mddev->private;
	int primary;
	int i;

	for (primary = 0; primary < conf->raid_disks; primary++)
		if (r1_bio->bios[primary]->bi_end_io == end_sync_read &&
		    test_bit(BIO_UPTODATE, &r1_bio->bios[primary]->bi_flags)) {
			r1_bio->bios[primary]->bi_end_io = NULL;
			rdev_dec_pending(conf->mirrors[primary].rdev, mddev);
			break;
		}
	r1_bio->read_disk = primary;
	for (i = 0; i < conf->raid_disks; i++) {
		int j;
		int vcnt = r1_bio->sectors >> (PAGE_SHIFT - 9);
		struct bio *pbio = r1_bio->bios[primary];
		struct bio *sbio = r1_bio->bios[i];
		int size;

		if (r1_bio->bios[i]->bi_end_io != end_sync_read)
			continue;

		if (test_bit(BIO_UPTODATE, &sbio->bi_flags)) {
			for (j = vcnt; j-- ; ) {
				struct page *p, *s;
				p = pbio->bi_io_vec[j].bv_page;
				s = sbio->bi_io_vec[j].bv_page;
				if (memcmp(page_address(p),
					   page_address(s),
					   PAGE_SIZE))
					break;
			}
		} else
			j = 0;
		if (j >= 0)
			mddev->resync_mismatches += r1_bio->sectors;
		if (j < 0 || (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)
			      && test_bit(BIO_UPTODATE, &sbio->bi_flags))) {
			/* No need to write to this device. */
			sbio->bi_end_io = NULL;
			rdev_dec_pending(conf->mirrors[i].rdev, mddev);
			continue;
		}
		/* fixup the bio for reuse */
		sbio->bi_vcnt = vcnt;
		sbio->bi_size = r1_bio->sectors << 9;
		sbio->bi_idx = 0;
		sbio->bi_phys_segments = 0;
		sbio->bi_flags &= ~(BIO_POOL_MASK - 1);
		sbio->bi_flags |= 1 << BIO_UPTODATE;
		sbio->bi_next = NULL;
		sbio->bi_sector = r1_bio->sector +
			conf->mirrors[i].rdev->data_offset;
		sbio->bi_bdev = conf->mirrors[i].rdev->bdev;
		size = sbio->bi_size;
		for (j = 0; j < vcnt ; j++) {
			struct bio_vec *bi;
			bi = &sbio->bi_io_vec[j];
			bi->bv_offset = 0;
			if (size > PAGE_SIZE)
				bi->bv_len = PAGE_SIZE;
			else
				bi->bv_len = size;
			size -= PAGE_SIZE;
			memcpy(page_address(bi->bv_page),
			       page_address(pbio->bi_io_vec[j].bv_page),
			       PAGE_SIZE);
		}
	}
	return 0;
}

static void sync_request_write(mddev_t *mddev, r1bio_t *r1_bio)
{
	conf_t *conf = mddev->private;
	int i;
	int disks = conf->raid_disks;
	struct bio *bio, *wbio;

	bio = r1_bio->bios[r1_bio->read_disk];

	if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
		/* ouch - failed to read all of that. */
		if (!fix_sync_read_error(r1_bio))
			return;

	if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
		if (process_checks(r1_bio) < 0)
			return;
	/*
	 * schedule writes
	 */
	atomic_set(&r1_bio->remaining, 1);
	for (i = 0; i < disks ; i++) {
		wbio = r1_bio->bios[i];
		if (wbio->bi_end_io == NULL ||
		    (wbio->bi_end_io == end_sync_read &&
		     (i == r1_bio->read_disk ||
		      !test_bit(MD_RECOVERY_SYNC, &mddev->recovery))))
			continue;

		wbio->bi_rw = WRITE;
		wbio->bi_end_io = end_sync_write;
		atomic_inc(&r1_bio->remaining);
		md_sync_acct(conf->mirrors[i].rdev->bdev, wbio->bi_size >> 9);

		generic_make_request(wbio);
	}

	if (atomic_dec_and_test(&r1_bio->remaining)) {
		/* if we're here, all write(s) have completed, so clean up */
		md_done_sync(mddev, r1_bio->sectors, 1);
		put_buf(r1_bio);
	}
}

/*
|
|
|
|
* This is a kernel thread which:
|
|
|
|
*
|
|
|
|
* 1. Retries failed read operations on working mirrors.
|
|
|
|
* 2. Updates the raid superblock when problems encounter.
|
|
|
|
* 3. Performs writes following reads for array syncronising.
|
|
|
|
*/
|
|
|
|
|
2006-10-03 12:15:51 +04:00
|
|
|
static void fix_read_error(conf_t *conf, int read_disk,
			   sector_t sect, int sectors)
{
	mddev_t *mddev = conf->mddev;
	while (sectors) {
		int s = sectors;
		int d = read_disk;
		int success = 0;
		int start;
		mdk_rdev_t *rdev;

		if (s > (PAGE_SIZE >> 9))
			s = PAGE_SIZE >> 9;

		do {
			/* Note: no rcu protection needed here
			 * as this is synchronous in the raid1d thread
			 * which is the thread that might remove
			 * a device.  If raid1d ever becomes multi-threaded....
			 */
			rdev = conf->mirrors[d].rdev;
			if (rdev &&
			    test_bit(In_sync, &rdev->flags) &&
			    sync_page_io(rdev, sect, s << 9,
					 conf->tmppage, READ, false))
				success = 1;
			else {
				d++;
				if (d == conf->raid_disks)
					d = 0;
			}
		} while (!success && d != read_disk);

		if (!success) {
			/* Cannot read from anywhere -- bye bye array */
			md_error(mddev, conf->mirrors[read_disk].rdev);
			break;
		}
		/* write it back and re-read */
		start = d;
		while (d != read_disk) {
			if (d == 0)
				d = conf->raid_disks;
			d--;
			rdev = conf->mirrors[d].rdev;
			if (rdev &&
			    test_bit(In_sync, &rdev->flags)) {
				if (sync_page_io(rdev, sect, s << 9,
						 conf->tmppage, WRITE, false)
				    == 0)
					/* Well, this device is dead */
					md_error(mddev, rdev);
			}
		}
		d = start;
		while (d != read_disk) {
			char b[BDEVNAME_SIZE];
			if (d == 0)
				d = conf->raid_disks;
			d--;
			rdev = conf->mirrors[d].rdev;
			if (rdev &&
			    test_bit(In_sync, &rdev->flags)) {
				if (sync_page_io(rdev, sect, s << 9,
						 conf->tmppage, READ, false)
				    == 0)
					/* Well, this device is dead */
					md_error(mddev, rdev);
				else {
					atomic_add(s, &rdev->corrected_errors);
					printk(KERN_INFO
					       "md/raid1:%s: read error corrected "
					       "(%d sectors at %llu on %s)\n",
					       mdname(mddev), s,
					       (unsigned long long)(sect +
								    rdev->data_offset),
					       bdevname(rdev->bdev, b));
				}
			}
		}
		sectors -= s;
		sect += s;
	}
}

static void raid1d(mddev_t *mddev)
{
	r1bio_t *r1_bio;
	struct bio *bio;
	unsigned long flags;
	conf_t *conf = mddev->private;
	struct list_head *head = &conf->retry_list;
	mdk_rdev_t *rdev;
	struct blk_plug plug;

	md_check_recovery(mddev);

	blk_start_plug(&plug);
	for (;;) {
		char b[BDEVNAME_SIZE];

		if (atomic_read(&mddev->plug_cnt) == 0)
			flush_pending_writes(conf);

		spin_lock_irqsave(&conf->device_lock, flags);
		if (list_empty(head)) {
			spin_unlock_irqrestore(&conf->device_lock, flags);
			break;
		}
		r1_bio = list_entry(head->prev, r1bio_t, retry_list);
		list_del(head->prev);
		conf->nr_queued--;
		spin_unlock_irqrestore(&conf->device_lock, flags);

		mddev = r1_bio->mddev;
		conf = mddev->private;
		if (test_bit(R1BIO_IsSync, &r1_bio->state))
			sync_request_write(mddev, r1_bio);
		else {
			int disk;

			/* We got a read error.  Maybe the drive is bad, or
			 * maybe just the block, in which case we can fix it.
			 * We freeze all other IO, and try reading the block
			 * from other devices.  When we find one, we re-write
			 * and check whether that fixes the read error.
			 * This is all done synchronously while the array is
			 * frozen.
			 */
			if (mddev->ro == 0) {
				freeze_array(conf);
				fix_read_error(conf, r1_bio->read_disk,
					       r1_bio->sector,
					       r1_bio->sectors);
				unfreeze_array(conf);
			} else
				md_error(mddev,
					 conf->mirrors[r1_bio->read_disk].rdev);

			bio = r1_bio->bios[r1_bio->read_disk];
			if ((disk = read_balance(conf, r1_bio)) == -1) {
				printk(KERN_ALERT "md/raid1:%s: %s: unrecoverable I/O"
				       " read error for block %llu\n",
				       mdname(mddev),
				       bdevname(bio->bi_bdev, b),
				       (unsigned long long)r1_bio->sector);
				raid_end_bio_io(r1_bio);
			} else {
				const unsigned long do_sync = r1_bio->master_bio->bi_rw & REQ_SYNC;
				r1_bio->bios[r1_bio->read_disk] =
					mddev->ro ? IO_BLOCKED : NULL;
				r1_bio->read_disk = disk;
				bio_put(bio);
				bio = bio_clone_mddev(r1_bio->master_bio,
						      GFP_NOIO, mddev);
				r1_bio->bios[r1_bio->read_disk] = bio;
				rdev = conf->mirrors[disk].rdev;
				printk_ratelimited(
					KERN_ERR
					"md/raid1:%s: redirecting sector %llu"
					" to other mirror: %s\n",
					mdname(mddev),
					(unsigned long long)r1_bio->sector,
					bdevname(rdev->bdev, b));
				bio->bi_sector = r1_bio->sector + rdev->data_offset;
				bio->bi_bdev = rdev->bdev;
				bio->bi_end_io = raid1_end_read_request;
				bio->bi_rw = READ | do_sync;
				bio->bi_private = r1_bio;
				generic_make_request(bio);
			}
		}
		cond_resched();
	}
	blk_finish_plug(&plug);
}

static int init_resync(conf_t *conf)
{
	int buffs;

	buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE;
	BUG_ON(conf->r1buf_pool);
	conf->r1buf_pool = mempool_create(buffs, r1buf_pool_alloc, r1buf_pool_free,
					  conf->poolinfo);
	if (!conf->r1buf_pool)
		return -ENOMEM;
	conf->next_resync = 0;
	return 0;
}

/*
 * perform a "sync" on one "block"
 *
 * We need to make sure that no normal I/O request - particularly write
 * requests - conflict with active sync requests.
 *
 * This is achieved by tracking pending requests and a 'barrier' concept
 * that can be installed to exclude normal IO requests.
 */

static sector_t sync_request(mddev_t *mddev, sector_t sector_nr, int *skipped, int go_faster)
{
	conf_t *conf = mddev->private;
	r1bio_t *r1_bio;
	struct bio *bio;
	sector_t max_sector, nr_sectors;
	int disk = -1;
	int i;
	int wonly = -1;
	int write_targets = 0, read_targets = 0;
	sector_t sync_blocks;
	int still_degraded = 0;

	if (!conf->r1buf_pool)
		if (init_resync(conf))
			return 0;

	max_sector = mddev->dev_sectors;
	if (sector_nr >= max_sector) {
		/* If we aborted, we need to abort the
		 * sync on the 'current' bitmap chunk (there will
		 * only be one in raid1 resync).
		 * We can find the current address in mddev->curr_resync.
		 */
		if (mddev->curr_resync < max_sector) /* aborted */
			bitmap_end_sync(mddev->bitmap, mddev->curr_resync,
					&sync_blocks, 1);
		else /* completed sync */
			conf->fullsync = 0;

		bitmap_close_sync(mddev->bitmap);
		close_sync(conf);
		return 0;
	}

	if (mddev->bitmap == NULL &&
	    mddev->recovery_cp == MaxSector &&
	    !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
	    conf->fullsync == 0) {
		*skipped = 1;
		return max_sector - sector_nr;
	}
	/* before building a request, check if we can skip these blocks..
	 * This call to bitmap_start_sync doesn't actually record anything.
	 */
	if (!bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, 1) &&
	    !conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
		/* We can skip this block, and probably several more */
		*skipped = 1;
		return sync_blocks;
	}
	/*
	 * If there is non-resync activity waiting for a turn,
	 * and resync is going fast enough,
	 * then let it through before starting on this new sync request.
	 */
	if (!go_faster && conf->nr_waiting)
		msleep_interruptible(1000);

	bitmap_cond_end_sync(mddev->bitmap, sector_nr);
	r1_bio = mempool_alloc(conf->r1buf_pool, GFP_NOIO);
	raise_barrier(conf);

	conf->next_resync = sector_nr;

	rcu_read_lock();
	/*
	 * If we get a correctable read error during resync or recovery,
	 * we might want to read from a different device.  So we
	 * flag all drives that could conceivably be read from for READ,
	 * and any others (which will be non-In_sync devices) for WRITE.
	 * If a read fails, we try reading from something else for which READ
	 * is OK.
	 */

	r1_bio->mddev = mddev;
	r1_bio->sector = sector_nr;
	r1_bio->state = 0;
	set_bit(R1BIO_IsSync, &r1_bio->state);

	for (i = 0; i < conf->raid_disks; i++) {
		mdk_rdev_t *rdev;
		bio = r1_bio->bios[i];

		/* take from bio_init */
		bio->bi_next = NULL;
		bio->bi_flags &= ~(BIO_POOL_MASK - 1);
		bio->bi_flags |= 1 << BIO_UPTODATE;
		bio->bi_comp_cpu = -1;
		bio->bi_rw = READ;
		bio->bi_vcnt = 0;
		bio->bi_idx = 0;
		bio->bi_phys_segments = 0;
		bio->bi_size = 0;
		bio->bi_end_io = NULL;
		bio->bi_private = NULL;

		rdev = rcu_dereference(conf->mirrors[i].rdev);
		if (rdev == NULL ||
		    test_bit(Faulty, &rdev->flags)) {
			still_degraded = 1;
			continue;
		} else if (!test_bit(In_sync, &rdev->flags)) {
			bio->bi_rw = WRITE;
			bio->bi_end_io = end_sync_write;
			write_targets++;
		} else {
			/* may need to read from here */
			bio->bi_rw = READ;
			bio->bi_end_io = end_sync_read;
			if (test_bit(WriteMostly, &rdev->flags)) {
				if (wonly < 0)
					wonly = i;
			} else {
				if (disk < 0)
					disk = i;
			}
			read_targets++;
		}
		atomic_inc(&rdev->nr_pending);
		bio->bi_sector = sector_nr + rdev->data_offset;
		bio->bi_bdev = rdev->bdev;
		bio->bi_private = r1_bio;
	}
	rcu_read_unlock();
	if (disk < 0)
		disk = wonly;
	r1_bio->read_disk = disk;

	if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) && read_targets > 0)
		/* extra read targets are also write targets */
		write_targets += read_targets - 1;

	if (write_targets == 0 || read_targets == 0) {
		/* There is nowhere to write, so all non-sync
		 * drives must be failed - so we are finished
		 */
		sector_t rv = max_sector - sector_nr;
		*skipped = 1;
		put_buf(r1_bio);
		return rv;
	}

	if (max_sector > mddev->resync_max)
		max_sector = mddev->resync_max; /* Don't do IO beyond here */
	nr_sectors = 0;
	sync_blocks = 0;
	do {
		struct page *page;
		int len = PAGE_SIZE;
		if (sector_nr + (len >> 9) > max_sector)
			len = (max_sector - sector_nr) << 9;
		if (len == 0)
			break;
		if (sync_blocks == 0) {
			if (!bitmap_start_sync(mddev->bitmap, sector_nr,
					       &sync_blocks, still_degraded) &&
			    !conf->fullsync &&
			    !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
				break;
			BUG_ON(sync_blocks < (PAGE_SIZE >> 9));
			if ((len >> 9) > sync_blocks)
				len = sync_blocks << 9;
		}

		for (i = 0; i < conf->raid_disks; i++) {
			bio = r1_bio->bios[i];
			if (bio->bi_end_io) {
				page = bio->bi_io_vec[bio->bi_vcnt].bv_page;
				if (bio_add_page(bio, page, len, 0) == 0) {
					/* stop here */
					bio->bi_io_vec[bio->bi_vcnt].bv_page = page;
					while (i > 0) {
						i--;
						bio = r1_bio->bios[i];
						if (bio->bi_end_io == NULL)
							continue;
						/* remove last page from this bio */
						bio->bi_vcnt--;
						bio->bi_size -= len;
						bio->bi_flags &= ~(1 << BIO_SEG_VALID);
					}
					goto bio_full;
				}
			}
		}
		nr_sectors += len >> 9;
		sector_nr += len >> 9;
		sync_blocks -= (len >> 9);
	} while (r1_bio->bios[disk]->bi_vcnt < RESYNC_PAGES);
 bio_full:
	r1_bio->sectors = nr_sectors;

	/* For a user-requested sync, we read all readable devices and do a
	 * compare.
	 */
	if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
		atomic_set(&r1_bio->remaining, read_targets);
		for (i = 0; i < conf->raid_disks; i++) {
			bio = r1_bio->bios[i];
			if (bio->bi_end_io == end_sync_read) {
				md_sync_acct(bio->bi_bdev, nr_sectors);
				generic_make_request(bio);
			}
		}
	} else {
		atomic_set(&r1_bio->remaining, 1);
		bio = r1_bio->bios[r1_bio->read_disk];
		md_sync_acct(bio->bi_bdev, nr_sectors);
		generic_make_request(bio);
	}
	return nr_sectors;
}

static sector_t raid1_size(mddev_t *mddev, sector_t sectors, int raid_disks)
{
	if (sectors)
		return sectors;

	return mddev->dev_sectors;
}

static conf_t *setup_conf(mddev_t *mddev)
{
	conf_t *conf;
	int i;
	mirror_info_t *disk;
	mdk_rdev_t *rdev;
	int err = -ENOMEM;

	conf = kzalloc(sizeof(conf_t), GFP_KERNEL);
	if (!conf)
		goto abort;

	conf->mirrors = kzalloc(sizeof(struct mirror_info)*mddev->raid_disks,
				GFP_KERNEL);
	if (!conf->mirrors)
		goto abort;

	conf->tmppage = alloc_page(GFP_KERNEL);
	if (!conf->tmppage)
		goto abort;

	conf->poolinfo = kzalloc(sizeof(*conf->poolinfo), GFP_KERNEL);
	if (!conf->poolinfo)
		goto abort;
	conf->poolinfo->raid_disks = mddev->raid_disks;
	conf->r1bio_pool = mempool_create(NR_RAID1_BIOS, r1bio_pool_alloc,
					  r1bio_pool_free,
					  conf->poolinfo);
	if (!conf->r1bio_pool)
		goto abort;

	conf->poolinfo->mddev = mddev;

	spin_lock_init(&conf->device_lock);
	list_for_each_entry(rdev, &mddev->disks, same_set) {
		int disk_idx = rdev->raid_disk;
		if (disk_idx >= mddev->raid_disks
		    || disk_idx < 0)
			continue;
		disk = conf->mirrors + disk_idx;

		disk->rdev = rdev;

		disk->head_position = 0;
	}
	conf->raid_disks = mddev->raid_disks;
	conf->mddev = mddev;
	INIT_LIST_HEAD(&conf->retry_list);

	spin_lock_init(&conf->resync_lock);
	init_waitqueue_head(&conf->wait_barrier);

	bio_list_init(&conf->pending_bio_list);

	conf->last_used = -1;
	for (i = 0; i < conf->raid_disks; i++) {

		disk = conf->mirrors + i;

		if (!disk->rdev ||
		    !test_bit(In_sync, &disk->rdev->flags)) {
			disk->head_position = 0;
			if (disk->rdev)
				conf->fullsync = 1;
		} else if (conf->last_used < 0)
			/*
			 * The first working device is used as the
			 * starting point for read balancing.
			 */
			conf->last_used = i;
	}

	err = -EIO;
	if (conf->last_used < 0) {
		printk(KERN_ERR "md/raid1:%s: no operational mirrors\n",
		       mdname(mddev));
		goto abort;
	}
	err = -ENOMEM;
	conf->thread = md_register_thread(raid1d, mddev, NULL);
	if (!conf->thread) {
		printk(KERN_ERR
		       "md/raid1:%s: couldn't allocate thread\n",
		       mdname(mddev));
		goto abort;
	}

	return conf;

 abort:
	if (conf) {
		if (conf->r1bio_pool)
			mempool_destroy(conf->r1bio_pool);
		kfree(conf->mirrors);
		safe_put_page(conf->tmppage);
		kfree(conf->poolinfo);
		kfree(conf);
	}
	return ERR_PTR(err);
}

static int run(mddev_t *mddev)
|
|
|
|
{
|
|
|
|
conf_t *conf;
|
|
|
|
int i;
|
|
|
|
mdk_rdev_t *rdev;
|
|
|
|
|
|
|
|
if (mddev->level != 1) {
|
2010-05-03 08:30:35 +04:00
|
|
|
printk(KERN_ERR "md/raid1:%s: raid level not set to mirroring (%d)\n",
|
2009-12-14 04:49:51 +03:00
|
|
|
mdname(mddev), mddev->level);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
if (mddev->reshape_position != MaxSector) {
|
2010-05-03 08:30:35 +04:00
|
|
|
printk(KERN_ERR "md/raid1:%s: reshape_position set but not supported\n",
|
2009-12-14 04:49:51 +03:00
|
|
|
mdname(mddev));
|
|
|
|
return -EIO;
|
|
|
|
}
|
2005-04-17 02:20:36 +04:00
|
|
|
/*
|
2009-12-14 04:49:51 +03:00
|
|
|
* copy the already verified devices into our private RAID1
|
|
|
|
* bookkeeping area. [whatever we allocate in run(),
|
|
|
|
* should be freed in stop()]
|
2005-04-17 02:20:36 +04:00
|
|
|
*/
|
2009-12-14 04:49:51 +03:00
|
|
|
if (mddev->private == NULL)
|
|
|
|
conf = setup_conf(mddev);
|
|
|
|
else
|
|
|
|
conf = mddev->private;
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2009-12-14 04:49:51 +03:00
|
|
|
if (IS_ERR(conf))
|
|
|
|
return PTR_ERR(conf);
|
2005-04-17 02:20:36 +04:00
|
|
|
|
2009-12-14 04:49:51 +03:00
|
|
|
list_for_each_entry(rdev, &mddev->disks, same_set) {
|
2011-06-08 02:50:35 +04:00
|
|
|
if (!mddev->gendisk)
|
|
|
|
continue;
|
2009-12-14 04:49:51 +03:00
|
|
|
disk_stack_limits(mddev->gendisk, rdev->bdev,
|
|
|
|
rdev->data_offset << 9);
|
|
|
|
/* as we don't honour merge_bvec_fn, we must never risk
|
2010-03-08 08:44:38 +03:00
|
|
|
* violating it, so limit ->max_segments to 1 lying within
|
|
|
|
* a single page, as a one page request is never in violation.
|
2009-12-14 04:49:51 +03:00
|
|
|
*/
|
		if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
			blk_queue_max_segments(mddev->queue, 1);
			blk_queue_segment_boundary(mddev->queue,
						   PAGE_CACHE_SIZE - 1);
		}
		if (rdev->badblocks.count) {
			printk(KERN_ERR "md/raid1: Cannot handle bad blocks yet\n");
			return -EINVAL;
		}
	}

	mddev->degraded = 0;
	for (i = 0; i < conf->raid_disks; i++)
		if (conf->mirrors[i].rdev == NULL ||
		    !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
		    test_bit(Faulty, &conf->mirrors[i].rdev->flags))
			mddev->degraded++;

	if (conf->raid_disks - mddev->degraded == 1)
		mddev->recovery_cp = MaxSector;

	if (mddev->recovery_cp != MaxSector)
		printk(KERN_NOTICE "md/raid1:%s: not clean"
		       " -- starting background reconstruction\n",
		       mdname(mddev));
	printk(KERN_INFO
	       "md/raid1:%s: active with %d out of %d mirrors\n",
	       mdname(mddev), mddev->raid_disks - mddev->degraded,
	       mddev->raid_disks);

	/*
	 * Ok, everything is just fine now
	 */
	mddev->thread = conf->thread;
	conf->thread = NULL;
	mddev->private = conf;

	md_set_array_sectors(mddev, raid1_size(mddev, 0, 0));

	if (mddev->queue) {
		mddev->queue->backing_dev_info.congested_fn = raid1_congested;
		mddev->queue->backing_dev_info.congested_data = mddev;
	}
	return md_integrity_register(mddev);
}

static int stop(mddev_t *mddev)
{
	conf_t *conf = mddev->private;
	struct bitmap *bitmap = mddev->bitmap;

	/* wait for behind writes to complete */
	if (bitmap && atomic_read(&bitmap->behind_writes) > 0) {
		printk(KERN_INFO "md/raid1:%s: behind writes in progress - waiting to stop.\n",
		       mdname(mddev));
		/* need to kick something here to make sure I/O goes? */
		wait_event(bitmap->behind_wait,
			   atomic_read(&bitmap->behind_writes) == 0);
	}

	raise_barrier(conf);
	lower_barrier(conf);

	md_unregister_thread(mddev->thread);
	mddev->thread = NULL;
	if (conf->r1bio_pool)
		mempool_destroy(conf->r1bio_pool);
	kfree(conf->mirrors);
	kfree(conf->poolinfo);
	kfree(conf);
	mddev->private = NULL;
	return 0;
}

static int raid1_resize(mddev_t *mddev, sector_t sectors)
{
	/* no resync is happening, and there is enough space
	 * on all devices, so we can resize.
	 * We need to make sure resync covers any new space.
	 * If the array is shrinking we should possibly wait until
	 * any io in the removed space completes, but it hardly seems
	 * worth it.
	 */
	md_set_array_sectors(mddev, raid1_size(mddev, sectors, 0));
	if (mddev->array_sectors > raid1_size(mddev, sectors, 0))
		return -EINVAL;
	set_capacity(mddev->gendisk, mddev->array_sectors);
	revalidate_disk(mddev->gendisk);
	if (sectors > mddev->dev_sectors &&
	    mddev->recovery_cp > mddev->dev_sectors) {
		mddev->recovery_cp = mddev->dev_sectors;
		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	}
	mddev->dev_sectors = sectors;
	mddev->resync_max_sectors = sectors;
	return 0;
}

static int raid1_reshape(mddev_t *mddev)
{
	/* We need to:
	 * 1/ resize the r1bio_pool
	 * 2/ resize conf->mirrors
	 *
	 * We allocate a new r1bio_pool if we can.
	 * Then raise a device barrier and wait until all IO stops.
	 * Then resize conf->mirrors and swap in the new r1bio pool.
	 *
	 * At the same time, we "pack" the devices so that all the missing
	 * devices have the higher raid_disk numbers.
	 */
	mempool_t *newpool, *oldpool;
	struct pool_info *newpoolinfo;
	mirror_info_t *newmirrors;
	conf_t *conf = mddev->private;
	int cnt, raid_disks;
	unsigned long flags;
	int d, d2, err;

	/* Cannot change chunk_size, layout, or level */
	if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
	    mddev->layout != mddev->new_layout ||
	    mddev->level != mddev->new_level) {
		mddev->new_chunk_sectors = mddev->chunk_sectors;
		mddev->new_layout = mddev->layout;
		mddev->new_level = mddev->level;
		return -EINVAL;
	}

	err = md_allow_write(mddev);
	if (err)
		return err;

	raid_disks = mddev->raid_disks + mddev->delta_disks;

	if (raid_disks < conf->raid_disks) {
		cnt = 0;
		for (d = 0; d < conf->raid_disks; d++)
			if (conf->mirrors[d].rdev)
				cnt++;
		if (cnt > raid_disks)
			return -EBUSY;
	}

	newpoolinfo = kmalloc(sizeof(*newpoolinfo), GFP_KERNEL);
	if (!newpoolinfo)
		return -ENOMEM;
	newpoolinfo->mddev = mddev;
	newpoolinfo->raid_disks = raid_disks;

	newpool = mempool_create(NR_RAID1_BIOS, r1bio_pool_alloc,
				 r1bio_pool_free, newpoolinfo);
	if (!newpool) {
		kfree(newpoolinfo);
		return -ENOMEM;
	}
	newmirrors = kzalloc(sizeof(struct mirror_info) * raid_disks, GFP_KERNEL);
	if (!newmirrors) {
		kfree(newpoolinfo);
		mempool_destroy(newpool);
		return -ENOMEM;
	}

	raise_barrier(conf);

	/* ok, everything is stopped */
	oldpool = conf->r1bio_pool;
	conf->r1bio_pool = newpool;

	for (d = d2 = 0; d < conf->raid_disks; d++) {
		mdk_rdev_t *rdev = conf->mirrors[d].rdev;
		if (rdev && rdev->raid_disk != d2) {
			sysfs_unlink_rdev(mddev, rdev);
			rdev->raid_disk = d2;
			sysfs_unlink_rdev(mddev, rdev);
			if (sysfs_link_rdev(mddev, rdev))
				printk(KERN_WARNING
				       "md/raid1:%s: cannot register rd%d\n",
				       mdname(mddev), rdev->raid_disk);
		}
		if (rdev)
			newmirrors[d2++].rdev = rdev;
	}
	kfree(conf->mirrors);
	conf->mirrors = newmirrors;
	kfree(conf->poolinfo);
	conf->poolinfo = newpoolinfo;

	spin_lock_irqsave(&conf->device_lock, flags);
	mddev->degraded += (raid_disks - conf->raid_disks);
	spin_unlock_irqrestore(&conf->device_lock, flags);
	conf->raid_disks = mddev->raid_disks = raid_disks;
	mddev->delta_disks = 0;

	conf->last_used = 0; /* just make sure it is in-range */
	lower_barrier(conf);

	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	md_wakeup_thread(mddev->thread);

	mempool_destroy(oldpool);
	return 0;
}

static void raid1_quiesce(mddev_t *mddev, int state)
{
	conf_t *conf = mddev->private;

	switch (state) {
	case 2: /* wake for suspend */
		wake_up(&conf->wait_barrier);
		break;
	case 1:
		raise_barrier(conf);
		break;
	case 0:
		lower_barrier(conf);
		break;
	}
}

static void *raid1_takeover(mddev_t *mddev)
{
	/* raid1 can take over:
	 *  raid5 with 2 devices, any layout or chunk size
	 */
	if (mddev->level == 5 && mddev->raid_disks == 2) {
		conf_t *conf;
		mddev->new_level = 1;
		mddev->new_layout = 0;
		mddev->new_chunk_sectors = 0;
		conf = setup_conf(mddev);
		if (!IS_ERR(conf))
			conf->barrier = 1;
		return conf;
	}
	return ERR_PTR(-EINVAL);
}

static struct mdk_personality raid1_personality =
{
	.name		= "raid1",
	.level		= 1,
	.owner		= THIS_MODULE,
	.make_request	= make_request,
	.run		= run,
	.stop		= stop,
	.status		= status,
	.error_handler	= error,
	.hot_add_disk	= raid1_add_disk,
	.hot_remove_disk = raid1_remove_disk,
	.spare_active	= raid1_spare_active,
	.sync_request	= sync_request,
	.resize		= raid1_resize,
	.size		= raid1_size,
	.check_reshape	= raid1_reshape,
	.quiesce	= raid1_quiesce,
	.takeover	= raid1_takeover,
};

static int __init raid_init(void)
{
	return register_md_personality(&raid1_personality);
}

static void raid_exit(void)
{
	unregister_md_personality(&raid1_personality);
}

module_init(raid_init);
module_exit(raid_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("RAID1 (mirroring) personality for MD");
MODULE_ALIAS("md-personality-3"); /* RAID1 */
MODULE_ALIAS("md-raid1");
MODULE_ALIAS("md-level-1");