dm raid: fix restoring of failed devices regression

'lvchange --refresh RaidLV' causes a mapped device suspend/resume cycle
aiming at device restore and resync after transient device failures.  This
failed because the RT_FLAG_RS_RESUMED flag was always cleared in the
suspend path, so the device restore was never performed in the resume path.
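
The broken handshake is easiest to see in isolation.  Below is a minimal
userspace model of the pre-patch logic, not the kernel code itself (the
kernel manipulates rs->runtime_flags with test_and_clear_bit() and friends;
the bool and the printed strings are stand-ins that merely name what would
happen).  It shows why the resume half of a refresh cycle never reaches
attempt_restore_of_faulty_devices():

	#include <stdbool.h>
	#include <stdio.h>

	static bool rs_resumed;	/* stand-in for RT_FLAG_RS_RESUMED */

	static void raid_postsuspend(void)
	{
		rs_resumed = false;	/* pre-patch: flag always cleared on suspend */
	}

	static void raid_resume(void)
	{
		if (rs_resumed)
			puts("secondary resume: attempt_restore_of_faulty_devices()");
		else
			puts("treated as first resume: restore skipped");
		rs_resumed = true;
	}

	int main(void)
	{
		raid_resume();		/* initial activation */
		raid_postsuspend();	/* 'lvchange --refresh': suspend clears the flag... */
		raid_resume();		/* ...so the resume misses the device restore */
		return 0;
	}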

Solve this by removing RT_FLAG_RS_RESUMED from the suspend path and resuming
unconditionally.  Also, remove a superfluous comment from raid_resume().

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Heinz Mauelshagen 2016-08-10 02:45:59 +02:00 committed by Mike Snitzer
Parent a4423287ec
Commit 31e10a4120
1 changed file with 12 additions and 23 deletions


drivers/md/dm-raid.c
@@ -3382,11 +3382,10 @@ static void raid_postsuspend(struct dm_target *ti)
 {
 	struct raid_set *rs = ti->private;
 
-	if (test_and_clear_bit(RT_FLAG_RS_RESUMED, &rs->runtime_flags)) {
-		if (!rs->md.suspended)
-			mddev_suspend(&rs->md);
-		rs->md.ro = 1;
-	}
+	if (!rs->md.suspended)
+		mddev_suspend(&rs->md);
+
+	rs->md.ro = 1;
 }
 
 static void attempt_restore_of_faulty_devices(struct raid_set *rs)
@@ -3606,25 +3605,15 @@ static void raid_resume(struct dm_target *ti)
 		 * devices are reachable again.
 		 */
 		attempt_restore_of_faulty_devices(rs);
-	} else {
-		mddev->ro = 0;
-		mddev->in_sync = 0;
-
-		/*
-		 * When passing in flags to the ctr, we expect userspace
-		 * to reset them because they made it to the superblocks
-		 * and reload the mapping anyway.
-		 *
-		 * -> only unfreeze recovery in case of a table reload or
-		 *    we'll have a bogus recovery/reshape position
-		 *    retrieved from the superblock by the ctr because
-		 *    the ongoing recovery/reshape will change it after read.
-		 */
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-
-		if (mddev->suspended)
-			mddev_resume(mddev);
 	}
+
+	mddev->ro = 0;
+	mddev->in_sync = 0;
+
+	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+
+	if (mddev->suspended)
+		mddev_resume(mddev);
 }
 
 static struct target_type raid_target = {
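
For completeness, the same toy model with the two hunks above applied: the
suspend path leaves the flag alone, so the resume half of a
'lvchange --refresh' cycle is again recognized as a secondary resume and the
device restore runs.  As before, this is a sketch of the control flow, not
the kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	static bool rs_resumed;	/* stand-in for RT_FLAG_RS_RESUMED */

	static void raid_postsuspend(void)
	{
		/* post-patch: suspend only; the flag is left set */
	}

	static void raid_resume(void)
	{
		if (rs_resumed)	/* still set after a refresh suspend */
			puts("secondary resume: attempt_restore_of_faulty_devices()");
		/* post-patch: the array is additionally set rw, marked out of
		 * sync and unfrozen unconditionally, matching the second hunk */
		rs_resumed = true;
	}

	int main(void)
	{
		raid_resume();		/* initial activation */
		raid_postsuspend();	/* 'lvchange --refresh': flag survives suspend */
		raid_resume();		/* restore and resync are reached again */
		return 0;
	}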