// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2000,2002,2005 Silicon Graphics, Inc.
 * All Rights Reserved.
 */
#ifndef	__XFS_TRANS_PRIV_H__
#define	__XFS_TRANS_PRIV_H__

struct xfs_log_item;
struct xfs_mount;
struct xfs_trans;
struct xfs_ail;
struct xfs_log_vec;

void	xfs_trans_init(struct xfs_mount *);
void	xfs_trans_add_item(struct xfs_trans *, struct xfs_log_item *);
void	xfs_trans_del_item(struct xfs_log_item *);
|
xfs: Introduce delayed logging core code
The delayed logging code only changes in-memory structures and as
such can be enabled and disabled with a mount option. Add the mount
option and emit a warning that this is an experimental feature that
should not be used in production yet.
We also need infrastructure to track committed items that have not
yet been written to the log. This is what the Committed Item List
(CIL) is for.
The log item also needs to be extended to track the current log
vector, the associated memory buffer and it's location in the Commit
Item List. Extend the log item and log vector structures to enable
this tracking.
To maintain the current log format for transactions with delayed
logging, we need to introduce a checkpoint transaction and a context
for tracking each checkpoint from initiation to transaction
completion. This includes adding a log ticket for tracking space
log required/used by the context checkpoint.
To track all the changes we need an io vector array per log item,
rather than a single array for the entire transaction. Using the new
log vector structure for this requires two passes - the first to
allocate the log vector structures and chain them together, and the
second to fill them out. This log vector chain can then be passed
to the CIL for formatting, pinning and insertion into the CIL.
Formatting of the log vector chain is relatively simple - it's just
a loop over the iovecs on each log vector, but it is made slightly
more complex because we re-write the iovec after the copy to point
back at the memory buffer we just copied into.
This code also needs to pin log items. If the log item is not
already tracked in this checkpoint context, then it needs to be
pinned. Otherwise it is already pinned and we don't need to pin it
again.
The only other complexity is calculating the amount of new log space
the formatting has consumed. This needs to be accounted to the
transaction in progress, and the accounting is made more complex
becase we need also to steal space from it for log metadata in the
checkpoint transaction. Calculate all this at insert time and update
all the tickets, counters, etc correctly.
Once we've formatted all the log items in the transaction, attach
the busy extents to the checkpoint context so the busy extents live
until checkpoint completion and can be processed at that point in
time. Transactions can then be freed at this point in time.
Now we need to issue checkpoints - we are tracking the amount of log space
used by the items in the CIL, so we can trigger background checkpoints when the
space usage gets to a certain threshold. Otherwise, checkpoints need ot be
triggered when a log synchronisation point is reached - a log force event.
Because the log write code already handles chained log vectors, writing the
transaction is trivial, too. Construct a transaction header, add it
to the head of the chain and write it into the log, then issue a
commit record write. Then we can release the checkpoint log ticket
and attach the context to the log buffer so it can be called during
Io completion to complete the checkpoint.
We also need to allow for synchronising multiple in-flight
checkpoints. This is needed for two things - the first is to ensure
that checkpoint commit records appear in the log in the correct
sequence order (so they are replayed in the correct order). The
second is so that xfs_log_force_lsn() operates correctly and only
flushes and/or waits for the specific sequence it was provided with.
To do this we need a wait variable and a list tracking the
checkpoint commits in progress. We can walk this list and wait for
the checkpoints to change state or complete easily, an this provides
the necessary synchronisation for correct operation in both cases.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
2010-05-21 08:37:18 +04:00
|
|
|
void xfs_trans_unreserve_and_mod_sb(struct xfs_trans *tp);

void	xfs_trans_committed_bulk(struct xfs_ail *ailp, struct xfs_log_vec *lv,
				xfs_lsn_t commit_lsn, bool aborted);

/*
 * AIL traversal cursor.
 *
 * Rather than using a generation number for detecting changes in the ail, use
 * a cursor that is protected by the ail lock. The aild cursor exists in the
 * struct xfs_ail, but other traversals can declare it on the stack and link it
 * to the ail list.
 *
 * When an object is deleted from or moved in the AIL, the cursor list is
 * searched to see if the object is a designated cursor item. If it is, it is
 * deleted from the cursor so that the next time the cursor is used traversal
 * will return to the start.
 *
 * This means a traversal colliding with a removal will cause a restart of the
 * list scan, rather than any insertion or deletion anywhere in the list. The
 * low bit of the item pointer is set if the cursor has been invalidated so
 * that we can tell the difference between invalidation and reaching the end
 * of the list to trigger traversal restarts.
 */
struct xfs_ail_cursor {
	struct list_head	list;	/* linked into the AIL's cursor list */
	struct xfs_log_item	*item;	/* next item; low bit set when invalidated */
};
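
/*
 * Illustrative sketch (not part of the original header): a typical
 * stack-declared cursor traversal using the cursor API declared later in
 * this file. "ailp" and "lsn" stand in for the caller's AIL and starting
 * LSN:
 *
 *	struct xfs_ail_cursor	cur;
 *	struct xfs_log_item	*lip;
 *
 *	spin_lock(&ailp->ail_lock);
 *	for (lip = xfs_trans_ail_cursor_first(ailp, &cur, lsn);
 *	     lip != NULL;
 *	     lip = xfs_trans_ail_cursor_next(ailp, &cur)) {
 *		examine lip; the AIL lock is held throughout
 *	}
 *	xfs_trans_ail_cursor_done(&cur);
 *	spin_unlock(&ailp->ail_lock);
 */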
/*
 * Private AIL structures.
 *
 * Eventually we need to drive the locking in here as well.
 */
struct xfs_ail {
	struct xfs_mount	*ail_mount;		/* mount this AIL belongs to */
	struct task_struct	*ail_task;		/* the xfsaild task */
	struct list_head	ail_head;		/* items in the AIL, LSN ordered */
	xfs_lsn_t		ail_target;		/* LSN to push out to */
	xfs_lsn_t		ail_target_prev;	/* previous push target */
	struct list_head	ail_cursors;		/* active traversal cursors */
	spinlock_t		ail_lock;		/* protects the AIL lists */
	xfs_lsn_t		ail_last_pushed_lsn;	/* LSN of last item pushed */
	int			ail_log_flush;		/* log force needed to unpin items */
	struct list_head	ail_buf_list;		/* delwri list for buffer pushing */
	wait_queue_head_t	ail_empty;		/* waiters for the AIL to drain */
};

/*
 * From xfs_trans_ail.c
 */
void	xfs_trans_ail_update_bulk(struct xfs_ail *ailp,
				struct xfs_ail_cursor *cur,
				struct xfs_log_item **log_items, int nr_items,
				xfs_lsn_t lsn) __releases(ailp->ail_lock);

/*
 * Return a pointer to the first item in the AIL. If the AIL is empty, then
 * return NULL.
 */
static inline struct xfs_log_item *
xfs_ail_min(
	struct xfs_ail		*ailp)
{
	return list_first_entry_or_null(&ailp->ail_head, struct xfs_log_item,
					li_ail);
}

static inline void
xfs_trans_ail_update(
	struct xfs_ail		*ailp,
	struct xfs_log_item	*lip,
	xfs_lsn_t		lsn) __releases(ailp->ail_lock)
{
	xfs_trans_ail_update_bulk(ailp, NULL, &lip, 1, lsn);
}
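
/*
 * Illustrative sketch (not part of the original header): both update
 * functions are entered with the AIL lock held and drop it on return, as
 * the __releases() annotations indicate. "ailp", "lip" and "commit_lsn"
 * stand in for the caller's values:
 *
 *	spin_lock(&ailp->ail_lock);
 *	xfs_trans_ail_update(ailp, lip, commit_lsn);
 *	the AIL lock has been dropped at this point
 */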

xfs_lsn_t xfs_ail_delete_one(struct xfs_ail *ailp, struct xfs_log_item *lip);
void xfs_ail_update_finish(struct xfs_ail *ailp, xfs_lsn_t old_lsn)
			__releases(ailp->ail_lock);
void xfs_trans_ail_delete(struct xfs_ail *ailp, struct xfs_log_item *lip,
		int shutdown_type);
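
/*
 * Illustrative sketch (not part of the original header; semantics assumed
 * from the callers): bulk removal pairs xfs_ail_delete_one(), which
 * returns a tail LSN when the deletion moved the AIL minimum (0
 * otherwise), with a single xfs_ail_update_finish() call that processes
 * the tail update and drops the AIL lock:
 *
 *	spin_lock(&ailp->ail_lock);
 *	tail_lsn = xfs_ail_delete_one(ailp, lip);
 *	xfs_ail_update_finish(ailp, tail_lsn);
 *	the AIL lock has been dropped at this point
 */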

static inline void
xfs_trans_ail_remove(
	struct xfs_log_item	*lip,
	int			shutdown_type)
{
	struct xfs_ail		*ailp = lip->li_ailp;

	spin_lock(&ailp->ail_lock);
	/* xfs_trans_ail_delete() drops the AIL lock */
	if (test_bit(XFS_LI_IN_AIL, &lip->li_flags))
		xfs_trans_ail_delete(ailp, lip, shutdown_type);
	else
		spin_unlock(&ailp->ail_lock);
}

void			xfs_ail_push(struct xfs_ail *, xfs_lsn_t);
void			xfs_ail_push_all(struct xfs_ail *);
void			xfs_ail_push_all_sync(struct xfs_ail *);
xfs_lsn_t		xfs_ail_min_lsn(struct xfs_ail *ailp);
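
/*
 * Illustrative sketch (not part of the original header): "ailp" and
 * "threshold_lsn" stand in for the caller's values. To request that the
 * AIL be pushed out to a given LSN, e.g. to free up log space:
 *
 *	xfs_ail_push(ailp, threshold_lsn);
 *
 * or, to push everything out and wait for the AIL to drain completely:
 *
 *	xfs_ail_push_all_sync(ailp);
 */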

struct xfs_log_item *	xfs_trans_ail_cursor_first(struct xfs_ail *ailp,
					struct xfs_ail_cursor *cur,
					xfs_lsn_t lsn);
struct xfs_log_item *	xfs_trans_ail_cursor_last(struct xfs_ail *ailp,
					struct xfs_ail_cursor *cur,
					xfs_lsn_t lsn);
struct xfs_log_item *	xfs_trans_ail_cursor_next(struct xfs_ail *ailp,
					struct xfs_ail_cursor *cur);
void			xfs_trans_ail_cursor_done(struct xfs_ail_cursor *cur);

/*
 * An xfs_lsn_t is 64 bits wide, so a 32-bit platform cannot read or write
 * one atomically. Do the copy under the AIL lock there to avoid torn
 * values; on 64-bit platforms a plain copy suffices.
 */
#if BITS_PER_LONG != 64
static inline void
xfs_trans_ail_copy_lsn(
	struct xfs_ail	*ailp,
	xfs_lsn_t	*dst,
	xfs_lsn_t	*src)
{
	ASSERT(sizeof(xfs_lsn_t) == 8);	/* don't lock if it shrinks */
	spin_lock(&ailp->ail_lock);
	*dst = *src;
	spin_unlock(&ailp->ail_lock);
}
#else
static inline void
xfs_trans_ail_copy_lsn(
	struct xfs_ail	*ailp,
	xfs_lsn_t	*dst,
	xfs_lsn_t	*src)
{
	ASSERT(sizeof(xfs_lsn_t) == 8);
	*dst = *src;
}
#endif
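
/*
 * Illustrative sketch (not part of the original header; the direction of
 * the copy is just an example): safely snapshot the current push target
 * on any word size:
 *
 *	xfs_lsn_t	target;
 *
 *	xfs_trans_ail_copy_lsn(ailp, &target, &ailp->ail_target);
 */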
static inline void
xfs_clear_li_failed(
	struct xfs_log_item	*lip)
{
	struct xfs_buf	*bp = lip->li_buf;

	ASSERT(test_bit(XFS_LI_IN_AIL, &lip->li_flags));
	lockdep_assert_held(&lip->li_ailp->ail_lock);

	if (test_and_clear_bit(XFS_LI_FAILED, &lip->li_flags)) {
		lip->li_buf = NULL;
		xfs_buf_rele(bp);
	}
}
static inline void
xfs_set_li_failed(
	struct xfs_log_item	*lip,
	struct xfs_buf		*bp)
{
	lockdep_assert_held(&lip->li_ailp->ail_lock);

	if (!test_and_set_bit(XFS_LI_FAILED, &lip->li_flags)) {
		xfs_buf_hold(bp);
		lip->li_buf = bp;
	}
}
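
/*
 * Illustrative sketch (not part of the original header): the failed-item
 * protocol as implied by the helpers above. When buffer writeback fails,
 * I/O completion marks each attached log item failed, pinning a buffer
 * reference so the buffer can be resubmitted later:
 *
 *	spin_lock(&ailp->ail_lock);
 *	xfs_set_li_failed(lip, bp);	(takes a reference to bp)
 *	spin_unlock(&ailp->ail_lock);
 *
 * A later successful resubmission clears the state and drops the
 * reference:
 *
 *	xfs_clear_li_failed(lip);	(releases the reference)
 */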
#endif /* __XFS_TRANS_PRIV_H__ */