Merge tag 'xfs-4.19-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux

Pull xfs updates from Darrick Wong:
 "This is the second part of the XFS changes for 4.19.

  The biggest changes are the removal of buffer heads from XFS, a massive
  reworking of the deferred transaction operations handling code, the
  removal of the long defunct barrier/nobarrier mount options, and the
  addition of a few more online repair functions.

  Summary:

   - Use extent maps to track pagecache page status instead of
     bufferhead state.

   - Refactor pagecache read and write paths to use the new iomap
     library functions, which enable us to drop the old bufferhead code
     for pagesize == blocksize filesystems.

   - Set up parallel per-block-per-page metadata to track subpage
     information that was tracked by buffer heads, which enables us to
     drop the old bufferhead code for pagesize > blocksize filesystems.

   - Tie a deferred ops control structure to a transaction so that we
     can take advantage of an upper-level dfops without having to plumb
     pointer passing through the code.

   - Refactor the deferred ops code to track deferred ops as part of the
     transaction structure (instead of as a separate data structure) so
     that we can simplify the scoping rules around defer_ops.

   - Refactor twisty delwri buffer submission code to avoid deadlocks.

   - Shorten and fix indenting problems in the scrub code.

   - Detect obviously bad summary counts at mount and fix them.

   - Directly associate deferred ops control structure with a
     transaction so that callers no longer have to manage it themselves.

   - Remove a couple of IRIX-era inode macros.

   - Remove the long-deprecated 'barrier' and 'nobarrier' mount options.

   - Clean up the inode fork structure a bit.

   - Check for bad fs summary counter values in the superblock.

   - Reduce COW fork lookups during writeback.

   - Refactor the deferred ops control structures into the transaction
     structure, thereby eliminating the need for transaction users to
     handle the deferred ops as a separate data structure.

   - Add the ability to repair AG headers online.

   - Fix a crash due to insufficient return value checking.

   - Various fixes and cleanups"
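
To make the deferred-ops rework described above concrete: before this merge,
callers kept a struct xfs_defer_ops and a firstblock on the stack and threaded
pointers to both through every helper; afterwards both live inside struct
xfs_trans. A minimal before/after sketch (the surrounding caller and its error
labels are hypothetical; the xfs_* signatures match the hunks below):

	/* 4.18 and earlier: caller-managed dfops and firstblock */
	struct xfs_defer_ops	dfops;
	xfs_fsblock_t		firstblock;

	xfs_defer_init(&dfops, &firstblock);
	error = xfs_bmapi_write(tp, ip, bno, len, flags,
				&firstblock, total, &map, &nmap, &dfops);
	if (error)
		goto out_defer_cancel;	/* xfs_defer_cancel(&dfops) */
	xfs_defer_ijoin(&dfops, ip);
	error = xfs_defer_finish(&tp, &dfops);
	if (error)
		goto out_defer_cancel;

	/* 4.19: dfops and t_firstblock are embedded in the transaction */
	error = xfs_bmapi_write(tp, ip, bno, len, flags, total, &map, &nmap);
	if (error)
		goto out;
	error = xfs_defer_finish(&tp);	/* cancels its own work on error */
	if (error)
		goto out;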

* tag 'xfs-4.19-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: (155 commits)
  xfs: fix a null pointer dereference in xfs_bmap_extents_to_btree
  xfs: remove b_last_holder & associated macros
  iomap: Switch to offset_in_page for clarity
  xfs: Close race between direct IO and xfs_break_layouts()
  xfs: repair the AGI
  xfs: repair the AGFL
  xfs: repair the AGF
  xfs: remove dead error handling code in xfs_dquot_disk_alloc()
  xfs: use WRITE_ONCE to update if_seq
  xfs: fix a comment in xfs_log_reserve
  xfs: only validate summary counts on primary superblock
  xfs: substitute spaces with tabs
  xfs: fold dfops into the transaction
  xfs: always defer agfl block frees
  xfs: pass transaction to xfs_defer_add()
  xfs: replace xfs_defer_ops ->dop_pending with on-stack list
  xfs: cancel dfops on xfs_defer_finish() error
  xfs: clean out superfluous dfops dop params/vars
  xfs: drop dop param from xfs_defer_op_type ->finish_item() callback
  xfs: automatic dfops inode relogging
  ...
Linus Torvalds 2018-08-14 08:56:02 -07:00
Parents: 10f3e23f07 01239d77b9
Commit: 781fca5b10
119 changed files, 5684 additions and 5345 deletions

View file

@ -223,8 +223,6 @@ Deprecated Mount Options
Name Removal Schedule
---- ----------------
barrier no earlier than v4.15
nobarrier no earlier than v4.15
Removed Mount Options
@ -236,6 +234,8 @@ Removed Mount Options
ihashsize v4.0
irixsgid v4.0
osyncisdsync/osyncisosync v4.0
barrier v4.19
nobarrier v4.19
sysctls
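
One cleanup recurs throughout the iomap hunks that follow: open-coded masks
like "pos & (PAGE_SIZE - 1)" are replaced by offset_in_page(pos). The helper
is not part of this diff; it comes from include/linux/mm.h, where (at the
time of this series) it is simply:

	#define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)

Since PAGE_MASK is ~(PAGE_SIZE - 1), the two forms compute the same value;
the named helper just states the intent.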

View file

@ -17,6 +17,7 @@
#include <linux/iomap.h>
#include <linux/uaccess.h>
#include <linux/gfp.h>
#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/mm_inline.h>
#include <linux/swap.h>
@ -104,6 +105,138 @@ iomap_sector(struct iomap *iomap, loff_t pos)
return (iomap->addr + pos - iomap->offset) >> SECTOR_SHIFT;
}
static struct iomap_page *
iomap_page_create(struct inode *inode, struct page *page)
{
struct iomap_page *iop = to_iomap_page(page);
if (iop || i_blocksize(inode) == PAGE_SIZE)
return iop;
iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
atomic_set(&iop->read_count, 0);
atomic_set(&iop->write_count, 0);
bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
set_page_private(page, (unsigned long)iop);
SetPagePrivate(page);
return iop;
}
static void
iomap_page_release(struct page *page)
{
struct iomap_page *iop = to_iomap_page(page);
if (!iop)
return;
WARN_ON_ONCE(atomic_read(&iop->read_count));
WARN_ON_ONCE(atomic_read(&iop->write_count));
ClearPagePrivate(page);
set_page_private(page, 0);
kfree(iop);
}
/*
* Calculate the range inside the page that we actually need to read.
*/
static void
iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
{
unsigned block_bits = inode->i_blkbits;
unsigned block_size = (1 << block_bits);
unsigned poff = offset_in_page(*pos);
unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
unsigned first = poff >> block_bits;
unsigned last = (poff + plen - 1) >> block_bits;
unsigned end = offset_in_page(i_size_read(inode)) >> block_bits;
/*
* If the block size is smaller than the page size we need to check the
* per-block uptodate status and adjust the offset and length if needed
* to avoid reading in already uptodate ranges.
*/
if (iop) {
unsigned int i;
/* move forward for each leading block marked uptodate */
for (i = first; i <= last; i++) {
if (!test_bit(i, iop->uptodate))
break;
*pos += block_size;
poff += block_size;
plen -= block_size;
first++;
}
/* truncate len if we find any trailing uptodate block(s) */
for ( ; i <= last; i++) {
if (test_bit(i, iop->uptodate)) {
plen -= (last - i + 1) * block_size;
last = i - 1;
break;
}
}
}
/*
* If the extent spans the block that contains the i_size we need to
* handle both halves separately so that we properly zero data in the
* page cache for blocks that are entirely outside of i_size.
*/
if (first <= end && last > end)
plen -= (last - end) * block_size;
*offp = poff;
*lenp = plen;
}
static void
iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
{
struct iomap_page *iop = to_iomap_page(page);
struct inode *inode = page->mapping->host;
unsigned first = off >> inode->i_blkbits;
unsigned last = (off + len - 1) >> inode->i_blkbits;
unsigned int i;
bool uptodate = true;
if (iop) {
for (i = 0; i < PAGE_SIZE / i_blocksize(inode); i++) {
if (i >= first && i <= last)
set_bit(i, iop->uptodate);
else if (!test_bit(i, iop->uptodate))
uptodate = false;
}
}
if (uptodate && !PageError(page))
SetPageUptodate(page);
}
static void
iomap_read_finish(struct iomap_page *iop, struct page *page)
{
if (!iop || atomic_dec_and_test(&iop->read_count))
unlock_page(page);
}
static void
iomap_read_page_end_io(struct bio_vec *bvec, int error)
{
struct page *page = bvec->bv_page;
struct iomap_page *iop = to_iomap_page(page);
if (unlikely(error)) {
ClearPageUptodate(page);
SetPageError(page);
} else {
iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
}
iomap_read_finish(iop, page);
}
static void
iomap_read_inline_data(struct inode *inode, struct page *page,
struct iomap *iomap)
@ -132,7 +265,7 @@ iomap_read_end_io(struct bio *bio)
int i;
bio_for_each_segment_all(bvec, bio, i)
page_endio(bvec->bv_page, false, error);
iomap_read_page_end_io(bvec, error);
bio_put(bio);
}
@ -150,9 +283,10 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
{
struct iomap_readpage_ctx *ctx = data;
struct page *page = ctx->cur_page;
unsigned poff = pos & (PAGE_SIZE - 1);
unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
struct iomap_page *iop = iomap_page_create(inode, page);
bool is_contig = false;
loff_t orig_pos = pos;
unsigned poff, plen;
sector_t sector;
if (iomap->type == IOMAP_INLINE) {
@ -161,13 +295,14 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
return PAGE_SIZE;
}
/* we don't support blocksize < PAGE_SIZE quite yet. */
WARN_ON_ONCE(pos != page_offset(page));
WARN_ON_ONCE(plen != PAGE_SIZE);
/* zero post-eof blocks as the page may be mapped */
iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen);
if (plen == 0)
goto done;
if (iomap->type != IOMAP_MAPPED || pos >= i_size_read(inode)) {
zero_user(page, poff, plen);
SetPageUptodate(page);
iomap_set_range_uptodate(page, poff, plen);
goto done;
}
@ -183,6 +318,14 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
is_contig = true;
}
/*
* If we start a new segment we need to increase the read count, and we
* need to do so before submitting any previous full bio to make sure
* that we don't prematurely unlock the page.
*/
if (iop)
atomic_inc(&iop->read_count);
if (!ctx->bio || !is_contig || bio_full(ctx->bio)) {
gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT;
@ -203,7 +346,13 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
__bio_add_page(ctx->bio, page, plen, poff);
done:
return plen;
/*
* Move the caller beyond our range so that it keeps making progress.
* For that we have to include any leading non-uptodate ranges, but
* we can skip trailing ones as they will be handled in the next
* iteration.
*/
return pos - orig_pos + plen;
}
int
@ -214,8 +363,6 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
unsigned poff;
loff_t ret;
WARN_ON_ONCE(page_has_buffers(page));
for (poff = 0; poff < PAGE_SIZE; poff += ret) {
ret = iomap_apply(inode, page_offset(page) + poff,
PAGE_SIZE - poff, 0, ops, &ctx,
@ -280,7 +427,7 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
loff_t done, ret;
for (done = 0; done < length; done += ret) {
if (ctx->cur_page && ((pos + done) & (PAGE_SIZE - 1)) == 0) {
if (ctx->cur_page && offset_in_page(pos + done) == 0) {
if (!ctx->cur_page_in_bio)
unlock_page(ctx->cur_page);
put_page(ctx->cur_page);
@ -341,6 +488,84 @@ done:
}
EXPORT_SYMBOL_GPL(iomap_readpages);
int
iomap_is_partially_uptodate(struct page *page, unsigned long from,
unsigned long count)
{
struct iomap_page *iop = to_iomap_page(page);
struct inode *inode = page->mapping->host;
unsigned first = from >> inode->i_blkbits;
unsigned last = (from + count - 1) >> inode->i_blkbits;
unsigned i;
if (iop) {
for (i = first; i <= last; i++)
if (!test_bit(i, iop->uptodate))
return 0;
return 1;
}
return 0;
}
EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
int
iomap_releasepage(struct page *page, gfp_t gfp_mask)
{
/*
* mm accommodates an old ext3 case where clean pages might not have had
* the dirty bit cleared. Thus, it can send actual dirty pages to
* ->releasepage() via shrink_active_list(), skip those here.
*/
if (PageDirty(page) || PageWriteback(page))
return 0;
iomap_page_release(page);
return 1;
}
EXPORT_SYMBOL_GPL(iomap_releasepage);
void
iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
{
/*
* If we are invalidating the entire page, clear the dirty state from it
* and release it to avoid unnecessary buildup of the LRU.
*/
if (offset == 0 && len == PAGE_SIZE) {
WARN_ON_ONCE(PageWriteback(page));
cancel_dirty_page(page);
iomap_page_release(page);
}
}
EXPORT_SYMBOL_GPL(iomap_invalidatepage);
#ifdef CONFIG_MIGRATION
int
iomap_migrate_page(struct address_space *mapping, struct page *newpage,
struct page *page, enum migrate_mode mode)
{
int ret;
ret = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0);
if (ret != MIGRATEPAGE_SUCCESS)
return ret;
if (page_has_private(page)) {
ClearPagePrivate(page);
set_page_private(newpage, page_private(page));
set_page_private(page, 0);
SetPagePrivate(newpage);
}
if (mode != MIGRATE_SYNC_NO_COPY)
migrate_page_copy(newpage, page);
else
migrate_page_states(newpage, page);
return MIGRATEPAGE_SUCCESS;
}
EXPORT_SYMBOL_GPL(iomap_migrate_page);
#endif /* CONFIG_MIGRATION */
static void
iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
{
@ -364,6 +589,7 @@ iomap_read_page_sync(struct inode *inode, loff_t block_start, struct page *page,
if (iomap->type != IOMAP_MAPPED || block_start >= i_size_read(inode)) {
zero_user_segments(page, poff, from, to, poff + plen);
iomap_set_range_uptodate(page, poff, plen);
return 0;
}
@ -379,21 +605,33 @@ static int
__iomap_write_begin(struct inode *inode, loff_t pos, unsigned len,
struct page *page, struct iomap *iomap)
{
struct iomap_page *iop = iomap_page_create(inode, page);
loff_t block_size = i_blocksize(inode);
loff_t block_start = pos & ~(block_size - 1);
loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1);
unsigned poff = block_start & (PAGE_SIZE - 1);
unsigned plen = min_t(loff_t, PAGE_SIZE - poff, block_end - block_start);
unsigned from = pos & (PAGE_SIZE - 1), to = from + len;
WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE);
unsigned from = offset_in_page(pos), to = from + len, poff, plen;
int status = 0;
if (PageUptodate(page))
return 0;
if (from <= poff && to >= poff + plen)
return 0;
return iomap_read_page_sync(inode, block_start, page,
poff, plen, from, to, iomap);
do {
iomap_adjust_read_range(inode, iop, &block_start,
block_end - block_start, &poff, &plen);
if (plen == 0)
break;
if ((from > poff && from < poff + plen) ||
(to > poff && to < poff + plen)) {
status = iomap_read_page_sync(inode, block_start, page,
poff, plen, from, to, iomap);
if (status)
break;
}
} while ((block_start += plen) < block_end);
return status;
}
static int
@ -476,7 +714,7 @@ __iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
if (unlikely(copied < len && !PageUptodate(page))) {
copied = 0;
} else {
SetPageUptodate(page);
iomap_set_range_uptodate(page, offset_in_page(pos), len);
iomap_set_page_dirty(page);
}
return __generic_write_end(inode, pos, copied, page);
@ -538,7 +776,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
unsigned long bytes; /* Bytes to write to page */
size_t copied; /* Bytes copied from user */
offset = (pos & (PAGE_SIZE - 1));
offset = offset_in_page(pos);
bytes = min_t(unsigned long, PAGE_SIZE - offset,
iov_iter_count(i));
again:
@ -652,7 +890,7 @@ iomap_dirty_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
unsigned long offset; /* Offset into pagecache page */
unsigned long bytes; /* Bytes to write to page */
offset = (pos & (PAGE_SIZE - 1));
offset = offset_in_page(pos);
bytes = min_t(loff_t, PAGE_SIZE - offset, length);
rpage = __iomap_read_page(inode, pos);
@ -744,7 +982,7 @@ iomap_zero_range_actor(struct inode *inode, loff_t pos, loff_t count,
do {
unsigned offset, bytes;
offset = pos & (PAGE_SIZE - 1); /* Within page */
offset = offset_in_page(pos);
bytes = min_t(loff_t, PAGE_SIZE - offset, count);
if (IS_DAX(inode))
@ -812,7 +1050,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
block_commit_write(page, 0, length);
} else {
WARN_ON_ONCE(!PageUptodate(page));
WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE);
iomap_page_create(inode, page);
}
return length;
@ -837,7 +1075,7 @@ int iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
/* page is wholly or partially inside EOF */
if (((page->index + 1) << PAGE_SHIFT) > size)
length = size & ~PAGE_MASK;
length = offset_in_page(size);
else
length = PAGE_SIZE;
@ -1000,7 +1238,7 @@ page_seek_hole_data(struct inode *inode, struct page *page, loff_t *lastoff,
goto out_unlock_not_found;
for (off = 0; off < PAGE_SIZE; off += bsize) {
if ((*lastoff & ~PAGE_MASK) >= off + bsize)
if (offset_in_page(*lastoff) >= off + bsize)
continue;
if (ops->is_partially_uptodate(page, off, bsize) == seek_data) {
unlock_page(page);
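
The hunks above lean on to_iomap_page() and struct iomap_page, which are
declared in include/linux/iomap.h and therefore do not appear in this view.
Roughly, in this series (one uptodate bit per 512-byte sector, so the bitmap
covers the worst-case number of blocks per page):

	struct iomap_page {
		atomic_t		read_count;
		atomic_t		write_count;
		DECLARE_BITMAP(uptodate, PAGE_SIZE / 512);
	};

	static inline struct iomap_page *
	to_iomap_page(struct page *page)
	{
		if (page_has_private(page))
			return (struct iomap_page *)page_private(page);
		return NULL;
	}

A filesystem converting to these helpers wires the new exports into its
address_space_operations; the foofs name below is illustrative, not from
this commit:

	static const struct address_space_operations foofs_aops = {
		/* .readpage, .writepages, ... as before */
		.is_partially_uptodate	= iomap_is_partially_uptodate,
		.releasepage		= iomap_releasepage,
		.invalidatepage		= iomap_invalidatepage,
	#ifdef CONFIG_MIGRATION
		.migratepage		= iomap_migrate_page,
	#endif
	};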

View file

@ -158,6 +158,7 @@ xfs-$(CONFIG_XFS_QUOTA) += scrub/quota.o
ifeq ($(CONFIG_XFS_ONLINE_REPAIR),y)
xfs-y += $(addprefix scrub/, \
agheader_repair.o \
bitmap.o \
repair.o \
)
endif

View file

@ -248,7 +248,8 @@ __xfs_ag_resv_init(
/* Create a per-AG block reservation. */
int
xfs_ag_resv_init(
struct xfs_perag *pag)
struct xfs_perag *pag,
struct xfs_trans *tp)
{
struct xfs_mount *mp = pag->pag_mount;
xfs_agnumber_t agno = pag->pag_agno;
@ -260,11 +261,11 @@ xfs_ag_resv_init(
if (pag->pag_meta_resv.ar_asked == 0) {
ask = used = 0;
error = xfs_refcountbt_calc_reserves(mp, agno, &ask, &used);
error = xfs_refcountbt_calc_reserves(mp, tp, agno, &ask, &used);
if (error)
goto out;
error = xfs_finobt_calc_reserves(mp, agno, &ask, &used);
error = xfs_finobt_calc_reserves(mp, tp, agno, &ask, &used);
if (error)
goto out;
@ -282,7 +283,7 @@ xfs_ag_resv_init(
mp->m_inotbt_nores = true;
error = xfs_refcountbt_calc_reserves(mp, agno, &ask,
error = xfs_refcountbt_calc_reserves(mp, tp, agno, &ask,
&used);
if (error)
goto out;
@ -298,7 +299,7 @@ xfs_ag_resv_init(
if (pag->pag_rmapbt_resv.ar_asked == 0) {
ask = used = 0;
error = xfs_rmapbt_calc_reserves(mp, agno, &ask, &used);
error = xfs_rmapbt_calc_reserves(mp, tp, agno, &ask, &used);
if (error)
goto out;
@ -309,7 +310,7 @@ xfs_ag_resv_init(
#ifdef DEBUG
/* need to read in the AGF for the ASSERT below to work */
error = xfs_alloc_pagf_init(pag->pag_mount, NULL, pag->pag_agno, 0);
error = xfs_alloc_pagf_init(pag->pag_mount, tp, pag->pag_agno, 0);
if (error)
return error;

View file

@ -7,7 +7,7 @@
#define __XFS_AG_RESV_H__
int xfs_ag_resv_free(struct xfs_perag *pag);
int xfs_ag_resv_init(struct xfs_perag *pag);
int xfs_ag_resv_init(struct xfs_perag *pag, struct xfs_trans *tp);
bool xfs_ag_resv_critical(struct xfs_perag *pag, enum xfs_ag_resv_type type);
xfs_extlen_t xfs_ag_resv_needed(struct xfs_perag *pag,
@ -28,7 +28,7 @@ xfs_ag_resv_rmapbt_alloc(
struct xfs_mount *mp,
xfs_agnumber_t agno)
{
struct xfs_alloc_arg args = {0};
struct xfs_alloc_arg args = { NULL };
struct xfs_perag *pag;
args.len = 1;

View file

@ -2198,12 +2198,12 @@ xfs_agfl_reset(
*/
STATIC void
xfs_defer_agfl_block(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_fsblock_t agbno,
struct xfs_owner_info *oinfo)
{
struct xfs_mount *mp = tp->t_mountp;
struct xfs_extent_free_item *new; /* new element */
ASSERT(xfs_bmap_free_item_zone != NULL);
@ -2216,7 +2216,7 @@ xfs_defer_agfl_block(
trace_xfs_agfl_free_defer(mp, agno, 0, agbno, 1);
xfs_defer_add(dfops, XFS_DEFER_OPS_TYPE_AGFL_FREE, &new->xefi_list);
xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_AGFL_FREE, &new->xefi_list);
}
/*
@ -2323,16 +2323,8 @@ xfs_alloc_fix_freelist(
if (error)
goto out_agbp_relse;
/* defer agfl frees if dfops is provided */
if (tp->t_agfl_dfops) {
xfs_defer_agfl_block(mp, tp->t_agfl_dfops, args->agno,
bno, &targs.oinfo);
} else {
error = xfs_free_agfl_block(tp, args->agno, bno, agbp,
&targs.oinfo);
if (error)
goto out_agbp_relse;
}
/* defer agfl frees */
xfs_defer_agfl_block(tp, args->agno, bno, &targs.oinfo);
}
targs.tp = tp;
@ -2755,9 +2747,6 @@ xfs_alloc_read_agf(
pag->pagf_levels[XFS_BTNUM_RMAPi] =
be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAPi]);
pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
spin_lock_init(&pag->pagb_lock);
pag->pagb_count = 0;
pag->pagb_tree = RB_ROOT;
pag->pagf_init = 1;
pag->pagf_agflreset = xfs_agfl_needs_reset(mp, agf);
}
@ -2784,16 +2773,16 @@ xfs_alloc_read_agf(
*/
int /* error */
xfs_alloc_vextent(
xfs_alloc_arg_t *args) /* allocation argument structure */
struct xfs_alloc_arg *args) /* allocation argument structure */
{
xfs_agblock_t agsize; /* allocation group size */
int error;
int flags; /* XFS_ALLOC_FLAG_... locking flags */
xfs_mount_t *mp; /* mount structure pointer */
xfs_agnumber_t sagno; /* starting allocation group number */
xfs_alloctype_t type; /* input allocation type */
int bump_rotor = 0;
xfs_agnumber_t rotorstep = xfs_rotorstep; /* inode32 agf stepper */
xfs_agblock_t agsize; /* allocation group size */
int error;
int flags; /* XFS_ALLOC_FLAG_... locking flags */
struct xfs_mount *mp; /* mount structure pointer */
xfs_agnumber_t sagno; /* starting allocation group number */
xfs_alloctype_t type; /* input allocation type */
int bump_rotor = 0;
xfs_agnumber_t rotorstep = xfs_rotorstep; /* inode32 agf stepper */
mp = args->mp;
type = args->otype = args->type;
@ -2914,7 +2903,7 @@ xfs_alloc_vextent(
* locking of AGF, which might cause deadlock.
*/
if (++(args->agno) == mp->m_sb.sb_agcount) {
if (args->firstblock != NULLFSBLOCK)
if (args->tp->t_firstblock != NULLFSBLOCK)
args->agno = sagno;
else
args->agno = 0;

View file

@ -74,7 +74,6 @@ typedef struct xfs_alloc_arg {
int datatype; /* mask defining data type treatment */
char wasdel; /* set if allocation was prev delayed */
char wasfromfl; /* set if allocation is from freelist */
xfs_fsblock_t firstblock; /* io first block allocated */
struct xfs_owner_info oinfo; /* owner of blocks being allocated */
enum xfs_ag_resv_type resv; /* block reservation to use */
} xfs_alloc_arg_t;

View file

@ -202,9 +202,7 @@ xfs_attr_set(
struct xfs_mount *mp = dp->i_mount;
struct xfs_buf *leaf_bp = NULL;
struct xfs_da_args args;
struct xfs_defer_ops dfops;
struct xfs_trans_res tres;
xfs_fsblock_t firstblock;
int rsvd = (flags & ATTR_ROOT) != 0;
int error, err2, local;
@ -219,8 +217,6 @@ xfs_attr_set(
args.value = value;
args.valuelen = valuelen;
args.firstblock = &firstblock;
args.dfops = &dfops;
args.op_flags = XFS_DA_OP_ADDNAME | XFS_DA_OP_OKNOENT;
args.total = xfs_attr_calc_size(&args, &local);
@ -315,21 +311,18 @@ xfs_attr_set(
* It won't fit in the shortform, transform to a leaf block.
* GROT: another possible req'mt for a double-split btree op.
*/
xfs_defer_init(args.dfops, args.firstblock);
error = xfs_attr_shortform_to_leaf(&args, &leaf_bp);
if (error)
goto out_defer_cancel;
goto out;
/*
* Prevent the leaf buffer from being unlocked so that a
* concurrent AIL push cannot grab the half-baked leaf
* buffer and run into problems with the write verifier.
*/
xfs_trans_bhold(args.trans, leaf_bp);
xfs_defer_bjoin(args.dfops, leaf_bp);
xfs_defer_ijoin(args.dfops, dp);
error = xfs_defer_finish(&args.trans, args.dfops);
error = xfs_defer_finish(&args.trans);
if (error)
goto out_defer_cancel;
goto out;
/*
* Commit the leaf transformation. We'll need another (linked)
@ -369,8 +362,6 @@ xfs_attr_set(
return error;
out_defer_cancel:
xfs_defer_cancel(&dfops);
out:
if (leaf_bp)
xfs_trans_brelse(args.trans, leaf_bp);
@ -392,8 +383,6 @@ xfs_attr_remove(
{
struct xfs_mount *mp = dp->i_mount;
struct xfs_da_args args;
struct xfs_defer_ops dfops;
xfs_fsblock_t firstblock;
int error;
XFS_STATS_INC(mp, xs_attr_remove);
@ -405,9 +394,6 @@ xfs_attr_remove(
if (error)
return error;
args.firstblock = &firstblock;
args.dfops = &dfops;
/*
* we have no control over the attribute names that userspace passes us
* to remove, so we have to allow the name lookup prior to attribute
@ -536,11 +522,12 @@ xfs_attr_shortform_addname(xfs_da_args_t *args)
* if bmap_one_block() says there is only one block (ie: no remote blks).
*/
STATIC int
xfs_attr_leaf_addname(xfs_da_args_t *args)
xfs_attr_leaf_addname(
struct xfs_da_args *args)
{
xfs_inode_t *dp;
struct xfs_buf *bp;
int retval, error, forkoff;
struct xfs_inode *dp;
struct xfs_buf *bp;
int retval, error, forkoff;
trace_xfs_attr_leaf_addname(args);
@ -598,14 +585,12 @@ xfs_attr_leaf_addname(xfs_da_args_t *args)
* Commit that transaction so that the node_addname() call
* can manage its own transactions.
*/
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_attr3_leaf_to_node(args);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
return error;
/*
* Commit the current trans (including the inode) and start
@ -687,15 +672,13 @@ xfs_attr_leaf_addname(xfs_da_args_t *args)
* If the result is small enough, shrink it all into the inode.
*/
if ((forkoff = xfs_attr_shortform_allfit(bp, dp))) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_attr3_leaf_to_shortform(bp, args, forkoff);
/* bp is gone due to xfs_da_shrink_inode */
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
return error;
}
/*
@ -711,7 +694,7 @@ xfs_attr_leaf_addname(xfs_da_args_t *args)
}
return error;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
xfs_defer_cancel(args->trans);
return error;
}
@ -722,11 +705,12 @@ out_defer_cancel:
* if bmap_one_block() says there is only one block (ie: no remote blks).
*/
STATIC int
xfs_attr_leaf_removename(xfs_da_args_t *args)
xfs_attr_leaf_removename(
struct xfs_da_args *args)
{
xfs_inode_t *dp;
struct xfs_buf *bp;
int error, forkoff;
struct xfs_inode *dp;
struct xfs_buf *bp;
int error, forkoff;
trace_xfs_attr_leaf_removename(args);
@ -751,19 +735,17 @@ xfs_attr_leaf_removename(xfs_da_args_t *args)
* If the result is small enough, shrink it all into the inode.
*/
if ((forkoff = xfs_attr_shortform_allfit(bp, dp))) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_attr3_leaf_to_shortform(bp, args, forkoff);
/* bp is gone due to xfs_da_shrink_inode */
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
return error;
}
return 0;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
xfs_defer_cancel(args->trans);
return error;
}
@ -814,13 +796,14 @@ xfs_attr_leaf_get(xfs_da_args_t *args)
* add a whole extra layer of confusion on top of that.
*/
STATIC int
xfs_attr_node_addname(xfs_da_args_t *args)
xfs_attr_node_addname(
struct xfs_da_args *args)
{
xfs_da_state_t *state;
xfs_da_state_blk_t *blk;
xfs_inode_t *dp;
xfs_mount_t *mp;
int retval, error;
struct xfs_da_state *state;
struct xfs_da_state_blk *blk;
struct xfs_inode *dp;
struct xfs_mount *mp;
int retval, error;
trace_xfs_attr_node_addname(args);
@ -879,14 +862,12 @@ restart:
*/
xfs_da_state_free(state);
state = NULL;
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_attr3_leaf_to_node(args);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
goto out;
/*
* Commit the node conversion and start the next
@ -905,14 +886,12 @@ restart:
* in the index/blkno/rmtblkno/rmtblkcnt fields and
* in the index2/blkno2/rmtblkno2/rmtblkcnt2 fields.
*/
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_da3_split(state);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
goto out;
} else {
/*
* Addition succeeded, update Btree hashvals.
@ -1003,14 +982,12 @@ restart:
* Check to see if the tree needs to be collapsed.
*/
if (retval && (state->path.active > 1)) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_da3_join(state);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
goto out;
}
/*
@ -1037,7 +1014,7 @@ out:
return error;
return retval;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
xfs_defer_cancel(args->trans);
goto out;
}
@ -1049,13 +1026,14 @@ out_defer_cancel:
* the root node (a special case of an intermediate node).
*/
STATIC int
xfs_attr_node_removename(xfs_da_args_t *args)
xfs_attr_node_removename(
struct xfs_da_args *args)
{
xfs_da_state_t *state;
xfs_da_state_blk_t *blk;
xfs_inode_t *dp;
struct xfs_buf *bp;
int retval, error, forkoff;
struct xfs_da_state *state;
struct xfs_da_state_blk *blk;
struct xfs_inode *dp;
struct xfs_buf *bp;
int retval, error, forkoff;
trace_xfs_attr_node_removename(args);
@ -1127,14 +1105,12 @@ xfs_attr_node_removename(xfs_da_args_t *args)
* Check to see if the tree needs to be collapsed.
*/
if (retval && (state->path.active > 1)) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_da3_join(state);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
goto out;
/*
* Commit the Btree join operation and start a new trans.
*/
@ -1159,15 +1135,13 @@ xfs_attr_node_removename(xfs_da_args_t *args)
goto out;
if ((forkoff = xfs_attr_shortform_allfit(bp, dp))) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_attr3_leaf_to_shortform(bp, args, forkoff);
/* bp is gone due to xfs_da_shrink_inode */
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
goto out;
} else
xfs_trans_brelse(args->trans, bp);
}
@ -1177,7 +1151,7 @@ out:
xfs_da_state_free(state);
return error;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
xfs_defer_cancel(args->trans);
goto out;
}

View file

@ -242,8 +242,9 @@ xfs_attr3_leaf_verify(
struct xfs_attr3_icleaf_hdr ichdr;
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_attr_leafblock *leaf = bp->b_addr;
struct xfs_perag *pag = bp->b_pag;
struct xfs_attr_leaf_entry *entries;
uint16_t end;
int i;
xfs_attr3_leaf_hdr_from_disk(mp->m_attr_geo, &ichdr, leaf);
@ -268,7 +269,7 @@ xfs_attr3_leaf_verify(
* because we may have transitioned an empty shortform attr to a leaf
* if the attr didn't fit in shortform.
*/
if (pag && pag->pagf_init && ichdr.count == 0)
if (!xfs_log_in_recovery(mp) && ichdr.count == 0)
return __this_address;
/*
@ -289,6 +290,26 @@ xfs_attr3_leaf_verify(
/* XXX: need to range check rest of attr header values */
/* XXX: hash order check? */
/*
* Quickly check the freemap information. Attribute data has to be
* aligned to 4-byte boundaries, and likewise for the free space.
*/
for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
if (ichdr.freemap[i].base > mp->m_attr_geo->blksize)
return __this_address;
if (ichdr.freemap[i].base & 0x3)
return __this_address;
if (ichdr.freemap[i].size > mp->m_attr_geo->blksize)
return __this_address;
if (ichdr.freemap[i].size & 0x3)
return __this_address;
end = ichdr.freemap[i].base + ichdr.freemap[i].size;
if (end < ichdr.freemap[i].base)
return __this_address;
if (end > mp->m_attr_geo->blksize)
return __this_address;
}
return NULL;
}
@ -506,7 +527,7 @@ xfs_attr_shortform_create(xfs_da_args_t *args)
{
xfs_attr_sf_hdr_t *hdr;
xfs_inode_t *dp;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
trace_xfs_attr_sf_create(args);
@ -541,7 +562,7 @@ xfs_attr_shortform_add(xfs_da_args_t *args, int forkoff)
int i, offset, size;
xfs_mount_t *mp;
xfs_inode_t *dp;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
trace_xfs_attr_sf_add(args);
@ -682,7 +703,7 @@ xfs_attr_shortform_lookup(xfs_da_args_t *args)
xfs_attr_shortform_t *sf;
xfs_attr_sf_entry_t *sfe;
int i;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
trace_xfs_attr_sf_lookup(args);
@ -747,18 +768,18 @@ xfs_attr_shortform_getvalue(xfs_da_args_t *args)
*/
int
xfs_attr_shortform_to_leaf(
struct xfs_da_args *args,
struct xfs_buf **leaf_bp)
struct xfs_da_args *args,
struct xfs_buf **leaf_bp)
{
xfs_inode_t *dp;
xfs_attr_shortform_t *sf;
xfs_attr_sf_entry_t *sfe;
xfs_da_args_t nargs;
char *tmpbuffer;
int error, i, size;
xfs_dablk_t blkno;
struct xfs_buf *bp;
xfs_ifork_t *ifp;
struct xfs_inode *dp;
struct xfs_attr_shortform *sf;
struct xfs_attr_sf_entry *sfe;
struct xfs_da_args nargs;
char *tmpbuffer;
int error, i, size;
xfs_dablk_t blkno;
struct xfs_buf *bp;
struct xfs_ifork *ifp;
trace_xfs_attr_sf_to_leaf(args);
@ -802,8 +823,6 @@ xfs_attr_shortform_to_leaf(
memset((char *)&nargs, 0, sizeof(nargs));
nargs.dp = dp;
nargs.geo = args->geo;
nargs.firstblock = args->firstblock;
nargs.dfops = args->dfops;
nargs.total = args->total;
nargs.whichfork = XFS_ATTR_FORK;
nargs.trans = args->trans;
@ -1006,8 +1025,6 @@ xfs_attr3_leaf_to_shortform(
memset((char *)&nargs, 0, sizeof(nargs));
nargs.geo = args->geo;
nargs.dp = dp;
nargs.firstblock = args->firstblock;
nargs.dfops = args->dfops;
nargs.total = args->total;
nargs.whichfork = XFS_ATTR_FORK;
nargs.trans = args->trans;
@ -1570,17 +1587,10 @@ xfs_attr3_leaf_rebalance(
*/
swap = 0;
if (xfs_attr3_leaf_order(blk1->bp, &ichdr1, blk2->bp, &ichdr2)) {
struct xfs_da_state_blk *tmp_blk;
struct xfs_attr3_icleaf_hdr tmp_ichdr;
swap(blk1, blk2);
tmp_blk = blk1;
blk1 = blk2;
blk2 = tmp_blk;
/* struct copies to swap them rather than reconverting */
tmp_ichdr = ichdr1;
ichdr1 = ichdr2;
ichdr2 = tmp_ichdr;
/* swap structures rather than reconverting them */
swap(ichdr1, ichdr2);
leaf1 = blk1->bp->b_addr;
leaf2 = blk2->bp->b_addr;

View file

@ -480,17 +480,15 @@ xfs_attr_rmtval_set(
* extent and then crash then the block may not contain the
* correct metadata after log recovery occurs.
*/
xfs_defer_init(args->dfops, args->firstblock);
nmap = 1;
error = xfs_bmapi_write(args->trans, dp, (xfs_fileoff_t)lblkno,
blkcnt, XFS_BMAPI_ATTRFORK, args->firstblock,
args->total, &map, &nmap, args->dfops);
blkcnt, XFS_BMAPI_ATTRFORK, args->total, &map,
&nmap);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
return error;
ASSERT(nmap == 1);
ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
@ -522,7 +520,6 @@ xfs_attr_rmtval_set(
ASSERT(blkcnt > 0);
xfs_defer_init(args->dfops, args->firstblock);
nmap = 1;
error = xfs_bmapi_read(dp, (xfs_fileoff_t)lblkno,
blkcnt, &map, &nmap,
@ -557,8 +554,7 @@ xfs_attr_rmtval_set(
ASSERT(valuelen == 0);
return 0;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
args->trans = NULL;
xfs_defer_cancel(args->trans);
return error;
}
@ -626,16 +622,13 @@ xfs_attr_rmtval_remove(
blkcnt = args->rmtblkcnt;
done = 0;
while (!done) {
xfs_defer_init(args->dfops, args->firstblock);
error = xfs_bunmapi(args->trans, args->dp, lblkno, blkcnt,
XFS_BMAPI_ATTRFORK, 1, args->firstblock,
args->dfops, &done);
XFS_BMAPI_ATTRFORK, 1, &done);
if (error)
goto out_defer_cancel;
xfs_defer_ijoin(args->dfops, args->dp);
error = xfs_defer_finish(&args->trans, args->dfops);
error = xfs_defer_finish(&args->trans);
if (error)
goto out_defer_cancel;
return error;
/*
* Close out trans and start the next one in the chain.
@ -646,7 +639,6 @@ xfs_attr_rmtval_remove(
}
return 0;
out_defer_cancel:
xfs_defer_cancel(args->dfops);
args->trans = NULL;
xfs_defer_cancel(args->trans);
return error;
}

File diff suppressed because it is too large

View file

@ -19,8 +19,6 @@ extern kmem_zone_t *xfs_bmap_free_item_zone;
* Argument structure for xfs_bmap_alloc.
*/
struct xfs_bmalloca {
xfs_fsblock_t *firstblock; /* i/o first block allocated */
struct xfs_defer_ops *dfops; /* bmap freelist */
struct xfs_trans *tp; /* transaction pointer */
struct xfs_inode *ip; /* incore inode pointer */
struct xfs_bmbt_irec prev; /* extent before the new one */
@ -68,8 +66,6 @@ struct xfs_extent_free_item
#define XFS_BMAPI_METADATA 0x002 /* mapping metadata not user data */
#define XFS_BMAPI_ATTRFORK 0x004 /* use attribute fork not data */
#define XFS_BMAPI_PREALLOC 0x008 /* preallocation op: unwritten space */
#define XFS_BMAPI_IGSTATE 0x010 /* Ignore state - */
/* combine contig. space */
#define XFS_BMAPI_CONTIG 0x020 /* must allocate only one extent */
/*
* unwritten extent conversion - this needs write cache flushing and no additional
@ -116,7 +112,6 @@ struct xfs_extent_free_item
{ XFS_BMAPI_METADATA, "METADATA" }, \
{ XFS_BMAPI_ATTRFORK, "ATTRFORK" }, \
{ XFS_BMAPI_PREALLOC, "PREALLOC" }, \
{ XFS_BMAPI_IGSTATE, "IGSTATE" }, \
{ XFS_BMAPI_CONTIG, "CONTIG" }, \
{ XFS_BMAPI_CONVERT, "CONVERT" }, \
{ XFS_BMAPI_ZERO, "ZERO" }, \
@ -189,9 +184,9 @@ void xfs_trim_extent(struct xfs_bmbt_irec *irec, xfs_fileoff_t bno,
void xfs_trim_extent_eof(struct xfs_bmbt_irec *, struct xfs_inode *);
int xfs_bmap_add_attrfork(struct xfs_inode *ip, int size, int rsvd);
void xfs_bmap_local_to_extents_empty(struct xfs_inode *ip, int whichfork);
void __xfs_bmap_add_free(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
xfs_fsblock_t bno, xfs_filblks_t len,
struct xfs_owner_info *oinfo, bool skip_discard);
void __xfs_bmap_add_free(struct xfs_trans *tp, xfs_fsblock_t bno,
xfs_filblks_t len, struct xfs_owner_info *oinfo,
bool skip_discard);
void xfs_bmap_compute_maxlevels(struct xfs_mount *mp, int whichfork);
int xfs_bmap_first_unused(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_extlen_t len, xfs_fileoff_t *unused, int whichfork);
@ -205,17 +200,13 @@ int xfs_bmapi_read(struct xfs_inode *ip, xfs_fileoff_t bno,
int *nmap, int flags);
int xfs_bmapi_write(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t bno, xfs_filblks_t len, int flags,
xfs_fsblock_t *firstblock, xfs_extlen_t total,
struct xfs_bmbt_irec *mval, int *nmap,
struct xfs_defer_ops *dfops);
xfs_extlen_t total, struct xfs_bmbt_irec *mval, int *nmap);
int __xfs_bunmapi(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t bno, xfs_filblks_t *rlen, int flags,
xfs_extnum_t nexts, xfs_fsblock_t *firstblock,
struct xfs_defer_ops *dfops);
xfs_extnum_t nexts);
int xfs_bunmapi(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t bno, xfs_filblks_t len, int flags,
xfs_extnum_t nexts, xfs_fsblock_t *firstblock,
struct xfs_defer_ops *dfops, int *done);
xfs_extnum_t nexts, int *done);
int xfs_bmap_del_extent_delay(struct xfs_inode *ip, int whichfork,
struct xfs_iext_cursor *cur, struct xfs_bmbt_irec *got,
struct xfs_bmbt_irec *del);
@ -225,14 +216,12 @@ void xfs_bmap_del_extent_cow(struct xfs_inode *ip,
uint xfs_default_attroffset(struct xfs_inode *ip);
int xfs_bmap_collapse_extents(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb,
bool *done, xfs_fsblock_t *firstblock,
struct xfs_defer_ops *dfops);
bool *done);
int xfs_bmap_can_insert_extents(struct xfs_inode *ip, xfs_fileoff_t off,
xfs_fileoff_t shift);
int xfs_bmap_insert_extents(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb,
bool *done, xfs_fileoff_t stop_fsb, xfs_fsblock_t *firstblock,
struct xfs_defer_ops *dfops);
bool *done, xfs_fileoff_t stop_fsb);
int xfs_bmap_split_extent(struct xfs_inode *ip, xfs_fileoff_t split_offset);
int xfs_bmapi_reserve_delalloc(struct xfs_inode *ip, int whichfork,
xfs_fileoff_t off, xfs_filblks_t len, xfs_filblks_t prealloc,
@ -241,13 +230,12 @@ int xfs_bmapi_reserve_delalloc(struct xfs_inode *ip, int whichfork,
static inline void
xfs_bmap_add_free(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_fsblock_t bno,
xfs_filblks_t len,
struct xfs_owner_info *oinfo)
{
__xfs_bmap_add_free(mp, dfops, bno, len, oinfo, false);
__xfs_bmap_add_free(tp, bno, len, oinfo, false);
}
enum xfs_bmap_intent_type {
@ -263,14 +251,14 @@ struct xfs_bmap_intent {
struct xfs_bmbt_irec bi_bmap;
};
int xfs_bmap_finish_one(struct xfs_trans *tp, struct xfs_defer_ops *dfops,
struct xfs_inode *ip, enum xfs_bmap_intent_type type,
int whichfork, xfs_fileoff_t startoff, xfs_fsblock_t startblock,
int xfs_bmap_finish_one(struct xfs_trans *tp, struct xfs_inode *ip,
enum xfs_bmap_intent_type type, int whichfork,
xfs_fileoff_t startoff, xfs_fsblock_t startblock,
xfs_filblks_t *blockcount, xfs_exntst_t state);
int xfs_bmap_map_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
struct xfs_inode *ip, struct xfs_bmbt_irec *imap);
int xfs_bmap_unmap_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
struct xfs_inode *ip, struct xfs_bmbt_irec *imap);
int xfs_bmap_map_extent(struct xfs_trans *tp, struct xfs_inode *ip,
struct xfs_bmbt_irec *imap);
int xfs_bmap_unmap_extent(struct xfs_trans *tp, struct xfs_inode *ip,
struct xfs_bmbt_irec *imap);
static inline int xfs_bmap_fork_to_state(int whichfork)
{
@ -289,6 +277,6 @@ xfs_failaddr_t xfs_bmap_validate_extent(struct xfs_inode *ip, int whichfork,
int xfs_bmapi_remap(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t bno, xfs_filblks_t len, xfs_fsblock_t startblock,
struct xfs_defer_ops *dfops, int flags);
int flags);
#endif /* __XFS_BMAP_H__ */
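
With firstblock and dfops dropped from the prototypes above, an unmap loop no
longer threads either one through; it just finishes the transaction's own
deferred work each pass. A hypothetical sketch against these 4.19 declarations
(foo_punch_range and its one-extent-per-pass policy are made up for
illustration):

	int
	foo_punch_range(
		struct xfs_trans	**tpp,
		struct xfs_inode	*ip,
		xfs_fileoff_t		bno,
		xfs_filblks_t		len)
	{
		int			done = 0;
		int			error = 0;

		while (!done) {
			/* unmap at most one extent per pass */
			error = xfs_bunmapi(*tpp, ip, bno, len, 0, 1, &done);
			if (error)
				break;
			/* finish dfops queued on *tpp; rolls the transaction */
			error = xfs_defer_finish(tpp);
			if (error)
				break;
		}
		return error;
	}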

View file

@ -175,8 +175,6 @@ xfs_bmbt_dup_cursor(
* Copy the firstblock, dfops, and flags values,
* since init cursor doesn't get them.
*/
new->bc_private.b.firstblock = cur->bc_private.b.firstblock;
new->bc_private.b.dfops = cur->bc_private.b.dfops;
new->bc_private.b.flags = cur->bc_private.b.flags;
return new;
@ -187,12 +185,11 @@ xfs_bmbt_update_cursor(
struct xfs_btree_cur *src,
struct xfs_btree_cur *dst)
{
ASSERT((dst->bc_private.b.firstblock != NULLFSBLOCK) ||
ASSERT((dst->bc_tp->t_firstblock != NULLFSBLOCK) ||
(dst->bc_private.b.ip->i_d.di_flags & XFS_DIFLAG_REALTIME));
ASSERT(dst->bc_private.b.dfops == src->bc_private.b.dfops);
dst->bc_private.b.allocated += src->bc_private.b.allocated;
dst->bc_private.b.firstblock = src->bc_private.b.firstblock;
dst->bc_tp->t_firstblock = src->bc_tp->t_firstblock;
src->bc_private.b.allocated = 0;
}
@ -210,8 +207,7 @@ xfs_bmbt_alloc_block(
memset(&args, 0, sizeof(args));
args.tp = cur->bc_tp;
args.mp = cur->bc_mp;
args.fsbno = cur->bc_private.b.firstblock;
args.firstblock = args.fsbno;
args.fsbno = cur->bc_tp->t_firstblock;
xfs_rmap_ino_bmbt_owner(&args.oinfo, cur->bc_private.b.ip->i_ino,
cur->bc_private.b.whichfork);
@ -230,7 +226,7 @@ xfs_bmbt_alloc_block(
* block allocation here and corrupt the filesystem.
*/
args.minleft = args.tp->t_blk_res;
} else if (cur->bc_private.b.dfops->dop_low) {
} else if (cur->bc_tp->t_flags & XFS_TRANS_LOWMODE) {
args.type = XFS_ALLOCTYPE_START_BNO;
} else {
args.type = XFS_ALLOCTYPE_NEAR_BNO;
@ -257,7 +253,7 @@ xfs_bmbt_alloc_block(
error = xfs_alloc_vextent(&args);
if (error)
goto error0;
cur->bc_private.b.dfops->dop_low = true;
cur->bc_tp->t_flags |= XFS_TRANS_LOWMODE;
}
if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) {
*stat = 0;
@ -265,7 +261,7 @@ xfs_bmbt_alloc_block(
}
ASSERT(args.len == 1);
cur->bc_private.b.firstblock = args.fsbno;
cur->bc_tp->t_firstblock = args.fsbno;
cur->bc_private.b.allocated++;
cur->bc_private.b.ip->i_d.di_nblocks++;
xfs_trans_log_inode(args.tp, cur->bc_private.b.ip, XFS_ILOG_CORE);
@ -293,7 +289,7 @@ xfs_bmbt_free_block(
struct xfs_owner_info oinfo;
xfs_rmap_ino_bmbt_owner(&oinfo, ip->i_ino, cur->bc_private.b.whichfork);
xfs_bmap_add_free(mp, cur->bc_private.b.dfops, fsbno, 1, &oinfo);
xfs_bmap_add_free(cur->bc_tp, fsbno, 1, &oinfo);
ip->i_d.di_nblocks--;
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
@ -564,8 +560,6 @@ xfs_bmbt_init_cursor(
cur->bc_private.b.forksize = XFS_IFORK_SIZE(ip, whichfork);
cur->bc_private.b.ip = ip;
cur->bc_private.b.firstblock = NULLFSBLOCK;
cur->bc_private.b.dfops = NULL;
cur->bc_private.b.allocated = 0;
cur->bc_private.b.flags = 0;
cur->bc_private.b.whichfork = whichfork;
@ -645,7 +639,7 @@ xfs_bmbt_change_owner(
cur->bc_private.b.flags |= XFS_BTCUR_BPRV_INVALID_OWNER;
error = xfs_btree_change_owner(cur, new_owner, buffer_list);
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
return error;
}

View file

@ -7,7 +7,6 @@
#define __XFS_BTREE_H__
struct xfs_buf;
struct xfs_defer_ops;
struct xfs_inode;
struct xfs_mount;
struct xfs_trans;
@ -209,14 +208,11 @@ typedef struct xfs_btree_cur
union {
struct { /* needed for BNO, CNT, INO */
struct xfs_buf *agbp; /* agf/agi buffer pointer */
struct xfs_defer_ops *dfops; /* deferred updates */
xfs_agnumber_t agno; /* ag number */
union xfs_btree_cur_private priv;
} a;
struct { /* needed for BMAP */
struct xfs_inode *ip; /* pointer to our inode */
struct xfs_defer_ops *dfops; /* deferred updates */
xfs_fsblock_t firstblock; /* 1st blk allocated */
int allocated; /* count of alloced */
short forksize; /* fork's inode space */
char whichfork; /* data or attr fork */

View file

@ -1481,6 +1481,7 @@ xfs_da3_node_lookup_int(
int error;
int retval;
unsigned int expected_level = 0;
uint16_t magic;
struct xfs_inode *dp = state->args->dp;
args = state->args;
@ -1505,25 +1506,27 @@ xfs_da3_node_lookup_int(
return error;
}
curr = blk->bp->b_addr;
blk->magic = be16_to_cpu(curr->magic);
magic = be16_to_cpu(curr->magic);
if (blk->magic == XFS_ATTR_LEAF_MAGIC ||
blk->magic == XFS_ATTR3_LEAF_MAGIC) {
if (magic == XFS_ATTR_LEAF_MAGIC ||
magic == XFS_ATTR3_LEAF_MAGIC) {
blk->magic = XFS_ATTR_LEAF_MAGIC;
blk->hashval = xfs_attr_leaf_lasthash(blk->bp, NULL);
break;
}
if (blk->magic == XFS_DIR2_LEAFN_MAGIC ||
blk->magic == XFS_DIR3_LEAFN_MAGIC) {
if (magic == XFS_DIR2_LEAFN_MAGIC ||
magic == XFS_DIR3_LEAFN_MAGIC) {
blk->magic = XFS_DIR2_LEAFN_MAGIC;
blk->hashval = xfs_dir2_leaf_lasthash(args->dp,
blk->bp, NULL);
break;
}
blk->magic = XFS_DA_NODE_MAGIC;
if (magic != XFS_DA_NODE_MAGIC && magic != XFS_DA3_NODE_MAGIC)
return -EFSCORRUPTED;
blk->magic = XFS_DA_NODE_MAGIC;
/*
* Search an intermediate node for a match.
@ -2059,11 +2062,9 @@ xfs_da_grow_inode_int(
* Try mapping it in one filesystem block.
*/
nmap = 1;
ASSERT(args->firstblock != NULL);
error = xfs_bmapi_write(tp, dp, *bno, count,
xfs_bmapi_aflag(w)|XFS_BMAPI_METADATA|XFS_BMAPI_CONTIG,
args->firstblock, args->total, &map, &nmap,
args->dfops);
args->total, &map, &nmap);
if (error)
return error;
@ -2085,8 +2086,7 @@ xfs_da_grow_inode_int(
c = (int)(*bno + count - b);
error = xfs_bmapi_write(tp, dp, b, c,
xfs_bmapi_aflag(w)|XFS_BMAPI_METADATA,
args->firstblock, args->total,
&mapp[mapi], &nmap, args->dfops);
args->total, &mapp[mapi], &nmap);
if (error)
goto out_free_map;
if (nmap < 1)
@ -2375,13 +2375,13 @@ done:
*/
int
xfs_da_shrink_inode(
xfs_da_args_t *args,
xfs_dablk_t dead_blkno,
struct xfs_buf *dead_buf)
struct xfs_da_args *args,
xfs_dablk_t dead_blkno,
struct xfs_buf *dead_buf)
{
xfs_inode_t *dp;
int done, error, w, count;
xfs_trans_t *tp;
struct xfs_inode *dp;
int done, error, w, count;
struct xfs_trans *tp;
trace_xfs_da_shrink_inode(args);
@ -2395,8 +2395,7 @@ xfs_da_shrink_inode(
* the last block to the place we want to kill.
*/
error = xfs_bunmapi(tp, dp, dead_blkno, count,
xfs_bmapi_aflag(w), 0, args->firstblock,
args->dfops, &done);
xfs_bmapi_aflag(w), 0, &done);
if (error == -ENOSPC) {
if (w != XFS_DATA_FORK)
break;

View file

@ -7,7 +7,6 @@
#ifndef __XFS_DA_BTREE_H__
#define __XFS_DA_BTREE_H__
struct xfs_defer_ops;
struct xfs_inode;
struct xfs_trans;
struct zone;
@ -57,8 +56,6 @@ typedef struct xfs_da_args {
xfs_dahash_t hashval; /* hash value of name */
xfs_ino_t inumber; /* input/output inode number */
struct xfs_inode *dp; /* directory inode to manipulate */
xfs_fsblock_t *firstblock; /* ptr to firstblock for bmap calls */
struct xfs_defer_ops *dfops; /* ptr to freelist for bmap_finish */
struct xfs_trans *trans; /* current trans (changes over time) */
xfs_extlen_t total; /* total blocks needed, for 1st bmap */
int whichfork; /* data or attribute fork */

View file

@ -14,6 +14,9 @@
#include "xfs_mount.h"
#include "xfs_defer.h"
#include "xfs_trans.h"
#include "xfs_buf_item.h"
#include "xfs_inode.h"
#include "xfs_inode_item.h"
#include "xfs_trace.h"
/*
@ -177,146 +180,157 @@ static const struct xfs_defer_op_type *defer_op_types[XFS_DEFER_OPS_TYPE_MAX];
* the pending list.
*/
STATIC void
xfs_defer_intake_work(
struct xfs_trans *tp,
struct xfs_defer_ops *dop)
xfs_defer_create_intents(
struct xfs_trans *tp)
{
struct list_head *li;
struct xfs_defer_pending *dfp;
list_for_each_entry(dfp, &dop->dop_intake, dfp_list) {
list_for_each_entry(dfp, &tp->t_dfops, dfp_list) {
dfp->dfp_intent = dfp->dfp_type->create_intent(tp,
dfp->dfp_count);
trace_xfs_defer_intake_work(tp->t_mountp, dfp);
trace_xfs_defer_create_intent(tp->t_mountp, dfp);
list_sort(tp->t_mountp, &dfp->dfp_work,
dfp->dfp_type->diff_items);
list_for_each(li, &dfp->dfp_work)
dfp->dfp_type->log_item(tp, dfp->dfp_intent, li);
}
list_splice_tail_init(&dop->dop_intake, &dop->dop_pending);
}
/* Abort all the intents that were committed. */
STATIC void
xfs_defer_trans_abort(
struct xfs_trans *tp,
struct xfs_defer_ops *dop,
int error)
struct list_head *dop_pending)
{
struct xfs_defer_pending *dfp;
trace_xfs_defer_trans_abort(tp->t_mountp, dop, _RET_IP_);
trace_xfs_defer_trans_abort(tp, _RET_IP_);
/* Abort intent items that don't have a done item. */
list_for_each_entry(dfp, &dop->dop_pending, dfp_list) {
list_for_each_entry(dfp, dop_pending, dfp_list) {
trace_xfs_defer_pending_abort(tp->t_mountp, dfp);
if (dfp->dfp_intent && !dfp->dfp_done) {
dfp->dfp_type->abort_intent(dfp->dfp_intent);
dfp->dfp_intent = NULL;
}
}
/* Shut down FS. */
xfs_force_shutdown(tp->t_mountp, (error == -EFSCORRUPTED) ?
SHUTDOWN_CORRUPT_INCORE : SHUTDOWN_META_IO_ERROR);
}
/* Roll a transaction so we can do some deferred op processing. */
STATIC int
xfs_defer_trans_roll(
struct xfs_trans **tp,
struct xfs_defer_ops *dop)
struct xfs_trans **tpp)
{
struct xfs_trans *tp = *tpp;
struct xfs_buf_log_item *bli;
struct xfs_inode_log_item *ili;
struct xfs_log_item *lip;
struct xfs_buf *bplist[XFS_DEFER_OPS_NR_BUFS];
struct xfs_inode *iplist[XFS_DEFER_OPS_NR_INODES];
int bpcount = 0, ipcount = 0;
int i;
int error;
/* Log all the joined inodes. */
for (i = 0; i < XFS_DEFER_OPS_NR_INODES && dop->dop_inodes[i]; i++)
xfs_trans_log_inode(*tp, dop->dop_inodes[i], XFS_ILOG_CORE);
list_for_each_entry(lip, &tp->t_items, li_trans) {
switch (lip->li_type) {
case XFS_LI_BUF:
bli = container_of(lip, struct xfs_buf_log_item,
bli_item);
if (bli->bli_flags & XFS_BLI_HOLD) {
if (bpcount >= XFS_DEFER_OPS_NR_BUFS) {
ASSERT(0);
return -EFSCORRUPTED;
}
xfs_trans_dirty_buf(tp, bli->bli_buf);
bplist[bpcount++] = bli->bli_buf;
}
break;
case XFS_LI_INODE:
ili = container_of(lip, struct xfs_inode_log_item,
ili_item);
if (ili->ili_lock_flags == 0) {
if (ipcount >= XFS_DEFER_OPS_NR_INODES) {
ASSERT(0);
return -EFSCORRUPTED;
}
xfs_trans_log_inode(tp, ili->ili_inode,
XFS_ILOG_CORE);
iplist[ipcount++] = ili->ili_inode;
}
break;
default:
break;
}
}
/* Hold the (previously bjoin'd) buffer locked across the roll. */
for (i = 0; i < XFS_DEFER_OPS_NR_BUFS && dop->dop_bufs[i]; i++)
xfs_trans_dirty_buf(*tp, dop->dop_bufs[i]);
trace_xfs_defer_trans_roll((*tp)->t_mountp, dop, _RET_IP_);
trace_xfs_defer_trans_roll(tp, _RET_IP_);
/* Roll the transaction. */
error = xfs_trans_roll(tp);
error = xfs_trans_roll(tpp);
tp = *tpp;
if (error) {
trace_xfs_defer_trans_roll_error((*tp)->t_mountp, dop, error);
xfs_defer_trans_abort(*tp, dop, error);
trace_xfs_defer_trans_roll_error(tp, error);
return error;
}
dop->dop_committed = true;
/* Rejoin the joined inodes. */
for (i = 0; i < XFS_DEFER_OPS_NR_INODES && dop->dop_inodes[i]; i++)
xfs_trans_ijoin(*tp, dop->dop_inodes[i], 0);
for (i = 0; i < ipcount; i++)
xfs_trans_ijoin(tp, iplist[i], 0);
/* Rejoin the buffers and dirty them so the log moves forward. */
for (i = 0; i < XFS_DEFER_OPS_NR_BUFS && dop->dop_bufs[i]; i++) {
xfs_trans_bjoin(*tp, dop->dop_bufs[i]);
xfs_trans_bhold(*tp, dop->dop_bufs[i]);
for (i = 0; i < bpcount; i++) {
xfs_trans_bjoin(tp, bplist[i]);
xfs_trans_bhold(tp, bplist[i]);
}
return error;
}
/* Do we have any work items to finish? */
bool
xfs_defer_has_unfinished_work(
struct xfs_defer_ops *dop)
/*
* Reset an already used dfops after finish.
*/
static void
xfs_defer_reset(
struct xfs_trans *tp)
{
return !list_empty(&dop->dop_pending) || !list_empty(&dop->dop_intake);
ASSERT(list_empty(&tp->t_dfops));
/*
* Low mode state transfers across transaction rolls to mirror dfops
* lifetime. Clear it now that dfops is reset.
*/
tp->t_flags &= ~XFS_TRANS_LOWMODE;
}
/*
* Add this inode to the deferred op. Each joined inode is relogged
* each time we roll the transaction.
* Free up any items left in the list.
*/
int
xfs_defer_ijoin(
struct xfs_defer_ops *dop,
struct xfs_inode *ip)
static void
xfs_defer_cancel_list(
struct xfs_mount *mp,
struct list_head *dop_list)
{
int i;
struct xfs_defer_pending *dfp;
struct xfs_defer_pending *pli;
struct list_head *pwi;
struct list_head *n;
for (i = 0; i < XFS_DEFER_OPS_NR_INODES; i++) {
if (dop->dop_inodes[i] == ip)
return 0;
else if (dop->dop_inodes[i] == NULL) {
dop->dop_inodes[i] = ip;
return 0;
/*
* Free the pending items. Caller should already have arranged
* for the intent items to be released.
*/
list_for_each_entry_safe(dfp, pli, dop_list, dfp_list) {
trace_xfs_defer_cancel_list(mp, dfp);
list_del(&dfp->dfp_list);
list_for_each_safe(pwi, n, &dfp->dfp_work) {
list_del(pwi);
dfp->dfp_count--;
dfp->dfp_type->cancel_item(pwi);
}
ASSERT(dfp->dfp_count == 0);
kmem_free(dfp);
}
ASSERT(0);
return -EFSCORRUPTED;
}
/*
* Add this buffer to the deferred op. Each joined buffer is relogged
* each time we roll the transaction.
*/
int
xfs_defer_bjoin(
struct xfs_defer_ops *dop,
struct xfs_buf *bp)
{
int i;
for (i = 0; i < XFS_DEFER_OPS_NR_BUFS; i++) {
if (dop->dop_bufs[i] == bp)
return 0;
else if (dop->dop_bufs[i] == NULL) {
dop->dop_bufs[i] = bp;
return 0;
}
}
ASSERT(0);
return -EFSCORRUPTED;
}
/*
@ -328,9 +342,8 @@ xfs_defer_bjoin(
* If an inode is provided, relog it to the new transaction.
*/
int
xfs_defer_finish(
struct xfs_trans **tp,
struct xfs_defer_ops *dop)
xfs_defer_finish_noroll(
struct xfs_trans **tp)
{
struct xfs_defer_pending *dfp;
struct list_head *li;
@ -338,35 +351,28 @@ xfs_defer_finish(
void *state;
int error = 0;
void (*cleanup_fn)(struct xfs_trans *, void *, int);
struct xfs_defer_ops *orig_dop;
LIST_HEAD(dop_pending);
ASSERT((*tp)->t_flags & XFS_TRANS_PERM_LOG_RES);
trace_xfs_defer_finish((*tp)->t_mountp, dop, _RET_IP_);
/*
* Attach dfops to the transaction during deferred ops processing. This
* explicitly causes calls into the allocator to defer AGFL block frees.
* Note that this code can go away once all dfops users attach to the
* associated tp.
*/
ASSERT(!(*tp)->t_agfl_dfops || ((*tp)->t_agfl_dfops == dop));
orig_dop = (*tp)->t_agfl_dfops;
(*tp)->t_agfl_dfops = dop;
trace_xfs_defer_finish(*tp, _RET_IP_);
/* Until we run out of pending work to finish... */
while (xfs_defer_has_unfinished_work(dop)) {
/* Log intents for work items sitting in the intake. */
xfs_defer_intake_work(*tp, dop);
while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
/* log intents and pull in intake items */
xfs_defer_create_intents(*tp);
list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
/* Roll the transaction. */
error = xfs_defer_trans_roll(tp, dop);
/*
* Roll the transaction.
*/
error = xfs_defer_trans_roll(tp);
if (error)
goto out;
/* Log an intent-done item for the first pending item. */
dfp = list_first_entry(&dop->dop_pending,
struct xfs_defer_pending, dfp_list);
dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
dfp_list);
trace_xfs_defer_pending_finish((*tp)->t_mountp, dfp);
dfp->dfp_done = dfp->dfp_type->create_done(*tp, dfp->dfp_intent,
dfp->dfp_count);
@ -377,7 +383,7 @@ xfs_defer_finish(
list_for_each_safe(li, n, &dfp->dfp_work) {
list_del(li);
dfp->dfp_count--;
error = dfp->dfp_type->finish_item(*tp, dop, li,
error = dfp->dfp_type->finish_item(*tp, li,
dfp->dfp_done, &state);
if (error == -EAGAIN) {
/*
@ -396,7 +402,6 @@ xfs_defer_finish(
*/
if (cleanup_fn)
cleanup_fn(*tp, state, error);
xfs_defer_trans_abort(*tp, dop, error);
goto out;
}
}
@ -425,72 +430,72 @@ xfs_defer_finish(
}
out:
(*tp)->t_agfl_dfops = orig_dop;
if (error)
trace_xfs_defer_finish_error((*tp)->t_mountp, dop, error);
else
trace_xfs_defer_finish_done((*tp)->t_mountp, dop, _RET_IP_);
return error;
if (error) {
xfs_defer_trans_abort(*tp, &dop_pending);
xfs_force_shutdown((*tp)->t_mountp, SHUTDOWN_CORRUPT_INCORE);
trace_xfs_defer_finish_error(*tp, error);
xfs_defer_cancel_list((*tp)->t_mountp, &dop_pending);
xfs_defer_cancel(*tp);
return error;
}
trace_xfs_defer_finish_done(*tp, _RET_IP_);
return 0;
}
/*
* Free up any items left in the list.
*/
void
xfs_defer_cancel(
struct xfs_defer_ops *dop)
int
xfs_defer_finish(
struct xfs_trans **tp)
{
struct xfs_defer_pending *dfp;
struct xfs_defer_pending *pli;
struct list_head *pwi;
struct list_head *n;
trace_xfs_defer_cancel(NULL, dop, _RET_IP_);
int error;
/*
* Free the pending items. Caller should already have arranged
* for the intent items to be released.
* Finish and roll the transaction once more to avoid returning to the
* caller with a dirty transaction.
*/
list_for_each_entry_safe(dfp, pli, &dop->dop_intake, dfp_list) {
trace_xfs_defer_intake_cancel(NULL, dfp);
list_del(&dfp->dfp_list);
list_for_each_safe(pwi, n, &dfp->dfp_work) {
list_del(pwi);
dfp->dfp_count--;
dfp->dfp_type->cancel_item(pwi);
error = xfs_defer_finish_noroll(tp);
if (error)
return error;
if ((*tp)->t_flags & XFS_TRANS_DIRTY) {
error = xfs_defer_trans_roll(tp);
if (error) {
xfs_force_shutdown((*tp)->t_mountp,
SHUTDOWN_CORRUPT_INCORE);
return error;
}
ASSERT(dfp->dfp_count == 0);
kmem_free(dfp);
}
list_for_each_entry_safe(dfp, pli, &dop->dop_pending, dfp_list) {
trace_xfs_defer_pending_cancel(NULL, dfp);
list_del(&dfp->dfp_list);
list_for_each_safe(pwi, n, &dfp->dfp_work) {
list_del(pwi);
dfp->dfp_count--;
dfp->dfp_type->cancel_item(pwi);
}
ASSERT(dfp->dfp_count == 0);
kmem_free(dfp);
}
xfs_defer_reset(*tp);
return 0;
}
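
With dfops embedded in the transaction, callers no longer declare, initialize, or cancel a separate xfs_defer_ops. A minimal sketch of the resulting caller pattern (the function name is hypothetical, not a hunk from this series):

	STATIC int
	example_update(
		struct xfs_trans	**tpp)
	{
		int			error;

		/* ... queue deferred work against *tpp via xfs_defer_add() ... */

		error = xfs_defer_finish(tpp);	/* rolls until t_dfops drains */
		if (error)
			return error;	/* pending dfops were cancelled on the error path */
		return xfs_trans_commit(*tpp);
	}
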
void
xfs_defer_cancel(
struct xfs_trans *tp)
{
struct xfs_mount *mp = tp->t_mountp;
trace_xfs_defer_cancel(tp, _RET_IP_);
xfs_defer_cancel_list(mp, &tp->t_dfops);
}
/* Add an item for later deferred processing. */
void
xfs_defer_add(
struct xfs_defer_ops *dop,
struct xfs_trans *tp,
enum xfs_defer_ops_type type,
struct list_head *li)
{
struct xfs_defer_pending *dfp = NULL;
ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES);
/*
* Add the item to a pending item at the end of the intake list.
* If the last pending item has the same type, reuse it. Else,
* create a new pending item at the end of the intake list.
*/
if (!list_empty(&dop->dop_intake)) {
dfp = list_last_entry(&dop->dop_intake,
if (!list_empty(&tp->t_dfops)) {
dfp = list_last_entry(&tp->t_dfops,
struct xfs_defer_pending, dfp_list);
if (dfp->dfp_type->type != type ||
(dfp->dfp_type->max_items &&
@ -505,7 +510,7 @@ xfs_defer_add(
dfp->dfp_done = NULL;
dfp->dfp_count = 0;
INIT_LIST_HEAD(&dfp->dfp_work);
list_add_tail(&dfp->dfp_list, &dop->dop_intake);
list_add_tail(&dfp->dfp_list, &tp->t_dfops);
}
list_add_tail(li, &dfp->dfp_work);
@ -520,15 +525,25 @@ xfs_defer_init_op_type(
defer_op_types[type->type] = type;
}
/* Initialize a deferred operation. */
/*
* Move deferred ops from one transaction to another and reset the source to
* initial state. This is primarily used to carry state forward across
* transaction rolls with pending dfops.
*/
void
xfs_defer_init(
struct xfs_defer_ops *dop,
xfs_fsblock_t *fbp)
xfs_defer_move(
struct xfs_trans *dtp,
struct xfs_trans *stp)
{
memset(dop, 0, sizeof(struct xfs_defer_ops));
*fbp = NULLFSBLOCK;
INIT_LIST_HEAD(&dop->dop_intake);
INIT_LIST_HEAD(&dop->dop_pending);
trace_xfs_defer_init(NULL, dop, _RET_IP_);
list_splice_init(&stp->t_dfops, &dtp->t_dfops);
/*
* Low free space mode was historically controlled by a dfops field.
* This meant that low mode state potentially carried across multiple
* transaction rolls. Transfer low mode on a dfops move to preserve
* that behavior.
*/
dtp->t_flags |= (stp->t_flags & XFS_TRANS_LOWMODE);
xfs_defer_reset(stp);
}
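
xfs_defer_move() is what keeps pending work alive across a roll now that the transaction owns it. A sketch of the assumed hand-off during duplicate-and-roll (a hypothetical helper, modeled on the flow described in the comment above):

	STATIC void
	example_roll_handoff(
		struct xfs_trans	*ntp,	/* duplicated transaction */
		struct xfs_trans	*tp)	/* original, about to commit */
	{
		/* ntp inherits the pending items and the LOWMODE hint */
		xfs_defer_move(ntp, tp);
	}
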


@ -24,17 +24,6 @@ struct xfs_defer_pending {
/*
* Header for deferred operation list.
*
* dop_low is used by the allocator to activate the lowspace algorithm -
* when free space is running low the extent allocator may choose to
* allocate an extent from an AG without leaving sufficient space for
* a btree split when inserting the new extent. In this case the allocator
* will enable the lowspace algorithm which is supposed to allow further
* allocations (such as btree splits and newroots) to allocate from
* sequential AGs. In order to avoid locking AGs out of order the lowspace
* algorithm will start searching for free space from AG 0. If the correct
* transaction reservations have been made then this algorithm will eventually
* find all the space it needs.
*/
enum xfs_defer_ops_type {
XFS_DEFER_OPS_TYPE_BMAP,
@ -45,28 +34,12 @@ enum xfs_defer_ops_type {
XFS_DEFER_OPS_TYPE_MAX,
};
#define XFS_DEFER_OPS_NR_INODES 2 /* join up to two inodes */
#define XFS_DEFER_OPS_NR_BUFS 2 /* join up to two buffers */
struct xfs_defer_ops {
bool dop_committed; /* did any trans commit? */
bool dop_low; /* alloc in low mode */
struct list_head dop_intake; /* unlogged pending work */
struct list_head dop_pending; /* logged pending work */
/* relog these with each roll */
struct xfs_inode *dop_inodes[XFS_DEFER_OPS_NR_INODES];
struct xfs_buf *dop_bufs[XFS_DEFER_OPS_NR_BUFS];
};
void xfs_defer_add(struct xfs_defer_ops *dop, enum xfs_defer_ops_type type,
void xfs_defer_add(struct xfs_trans *tp, enum xfs_defer_ops_type type,
struct list_head *h);
int xfs_defer_finish(struct xfs_trans **tp, struct xfs_defer_ops *dop);
void xfs_defer_cancel(struct xfs_defer_ops *dop);
void xfs_defer_init(struct xfs_defer_ops *dop, xfs_fsblock_t *fbp);
bool xfs_defer_has_unfinished_work(struct xfs_defer_ops *dop);
int xfs_defer_ijoin(struct xfs_defer_ops *dop, struct xfs_inode *ip);
int xfs_defer_bjoin(struct xfs_defer_ops *dop, struct xfs_buf *bp);
int xfs_defer_finish_noroll(struct xfs_trans **tp);
int xfs_defer_finish(struct xfs_trans **tp);
void xfs_defer_cancel(struct xfs_trans *);
void xfs_defer_move(struct xfs_trans *dtp, struct xfs_trans *stp);
/* Description of a deferred type. */
struct xfs_defer_op_type {
@ -74,8 +47,8 @@ struct xfs_defer_op_type {
unsigned int max_items;
void (*abort_intent)(void *);
void *(*create_done)(struct xfs_trans *, void *, unsigned int);
int (*finish_item)(struct xfs_trans *, struct xfs_defer_ops *,
struct list_head *, void *, void **);
int (*finish_item)(struct xfs_trans *, struct list_head *, void *,
void **);
void (*finish_cleanup)(struct xfs_trans *, void *, int);
void (*cancel_item)(struct list_head *);
int (*diff_items)(void *, struct list_head *, struct list_head *);


@ -239,12 +239,10 @@ xfs_dir_init(
*/
int
xfs_dir_createname(
xfs_trans_t *tp,
xfs_inode_t *dp,
struct xfs_trans *tp,
struct xfs_inode *dp,
struct xfs_name *name,
xfs_ino_t inum, /* new entry inode number */
xfs_fsblock_t *first, /* bmap's firstblock */
struct xfs_defer_ops *dfops, /* bmap's freeblock list */
xfs_extlen_t total) /* bmap's total block count */
{
struct xfs_da_args *args;
@ -252,6 +250,7 @@ xfs_dir_createname(
int v; /* type-checking value */
ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
if (inum) {
rval = xfs_dir_ino_validate(tp->t_mountp, inum);
if (rval)
@ -270,8 +269,6 @@ xfs_dir_createname(
args->hashval = dp->i_mount->m_dirnameops->hashname(name);
args->inumber = inum;
args->dp = dp;
args->firstblock = first;
args->dfops = dfops;
args->total = total;
args->whichfork = XFS_DATA_FORK;
args->trans = tp;
@ -416,17 +413,15 @@ out_free:
*/
int
xfs_dir_removename(
xfs_trans_t *tp,
xfs_inode_t *dp,
struct xfs_name *name,
xfs_ino_t ino,
xfs_fsblock_t *first, /* bmap's firstblock */
struct xfs_defer_ops *dfops, /* bmap's freeblock list */
xfs_extlen_t total) /* bmap's total block count */
struct xfs_trans *tp,
struct xfs_inode *dp,
struct xfs_name *name,
xfs_ino_t ino,
xfs_extlen_t total) /* bmap's total block count */
{
struct xfs_da_args *args;
int rval;
int v; /* type-checking value */
struct xfs_da_args *args;
int rval;
int v; /* type-checking value */
ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
XFS_STATS_INC(dp->i_mount, xs_dir_remove);
@ -442,8 +437,6 @@ xfs_dir_removename(
args->hashval = dp->i_mount->m_dirnameops->hashname(name);
args->inumber = ino;
args->dp = dp;
args->firstblock = first;
args->dfops = dfops;
args->total = total;
args->whichfork = XFS_DATA_FORK;
args->trans = tp;
@ -478,17 +471,15 @@ out_free:
*/
int
xfs_dir_replace(
xfs_trans_t *tp,
xfs_inode_t *dp,
struct xfs_name *name, /* name of entry to replace */
xfs_ino_t inum, /* new inode number */
xfs_fsblock_t *first, /* bmap's firstblock */
struct xfs_defer_ops *dfops, /* bmap's freeblock list */
xfs_extlen_t total) /* bmap's total block count */
struct xfs_trans *tp,
struct xfs_inode *dp,
struct xfs_name *name, /* name of entry to replace */
xfs_ino_t inum, /* new inode number */
xfs_extlen_t total) /* bmap's total block count */
{
struct xfs_da_args *args;
int rval;
int v; /* type-checking value */
struct xfs_da_args *args;
int rval;
int v; /* type-checking value */
ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
@ -507,8 +498,6 @@ xfs_dir_replace(
args->hashval = dp->i_mount->m_dirnameops->hashname(name);
args->inumber = inum;
args->dp = dp;
args->firstblock = first;
args->dfops = dfops;
args->total = total;
args->whichfork = XFS_DATA_FORK;
args->trans = tp;
@ -547,7 +536,7 @@ xfs_dir_canenter(
xfs_inode_t *dp,
struct xfs_name *name) /* name of entry to add */
{
return xfs_dir_createname(tp, dp, name, 0, NULL, NULL, 0);
return xfs_dir_createname(tp, dp, name, 0, 0);
}
/*
@ -645,17 +634,17 @@ xfs_dir2_isleaf(
*/
int
xfs_dir2_shrink_inode(
xfs_da_args_t *args,
xfs_dir2_db_t db,
struct xfs_buf *bp)
struct xfs_da_args *args,
xfs_dir2_db_t db,
struct xfs_buf *bp)
{
xfs_fileoff_t bno; /* directory file offset */
xfs_dablk_t da; /* directory file offset */
int done; /* bunmap is finished */
xfs_inode_t *dp;
int error;
xfs_mount_t *mp;
xfs_trans_t *tp;
xfs_fileoff_t bno; /* directory file offset */
xfs_dablk_t da; /* directory file offset */
int done; /* bunmap is finished */
struct xfs_inode *dp;
int error;
struct xfs_mount *mp;
struct xfs_trans *tp;
trace_xfs_dir2_shrink_inode(args, db);
@ -665,8 +654,7 @@ xfs_dir2_shrink_inode(
da = xfs_dir2_db_to_da(args->geo, db);
/* Unmap the fsblock(s). */
error = xfs_bunmapi(tp, dp, da, args->geo->fsbcount, 0, 0,
args->firstblock, args->dfops, &done);
error = xfs_bunmapi(tp, dp, da, args->geo->fsbcount, 0, 0, &done);
if (error) {
/*
* ENOSPC actually can happen if we're in a removename with no


@ -9,7 +9,6 @@
#include "xfs_da_format.h"
#include "xfs_da_btree.h"
struct xfs_defer_ops;
struct xfs_da_args;
struct xfs_inode;
struct xfs_mount;
@ -118,19 +117,16 @@ extern int xfs_dir_init(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_inode *pdp);
extern int xfs_dir_createname(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t inum,
xfs_fsblock_t *first,
struct xfs_defer_ops *dfops, xfs_extlen_t tot);
xfs_extlen_t tot);
extern int xfs_dir_lookup(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t *inum,
struct xfs_name *ci_name);
extern int xfs_dir_removename(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t ino,
xfs_fsblock_t *first,
struct xfs_defer_ops *dfops, xfs_extlen_t tot);
xfs_extlen_t tot);
extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name, xfs_ino_t inum,
xfs_fsblock_t *first,
struct xfs_defer_ops *dfops, xfs_extlen_t tot);
xfs_extlen_t tot);
extern int xfs_dir_canenter(struct xfs_trans *tp, struct xfs_inode *dp,
struct xfs_name *name);


@ -1012,7 +1012,7 @@ xfs_dir2_leafn_rebalance(
int oldstale; /* old count of stale leaves */
#endif
int oldsum; /* old total leaf count */
int swap; /* swapped leaf blocks */
int swap_blocks; /* swapped leaf blocks */
struct xfs_dir2_leaf_entry *ents1;
struct xfs_dir2_leaf_entry *ents2;
struct xfs_dir3_icleaf_hdr hdr1;
@ -1023,13 +1023,10 @@ xfs_dir2_leafn_rebalance(
/*
* If the block order is wrong, swap the arguments.
*/
if ((swap = xfs_dir2_leafn_order(dp, blk1->bp, blk2->bp))) {
xfs_da_state_blk_t *tmp; /* temp for block swap */
swap_blocks = xfs_dir2_leafn_order(dp, blk1->bp, blk2->bp);
if (swap_blocks)
swap(blk1, blk2);
tmp = blk1;
blk1 = blk2;
blk2 = tmp;
}
leaf1 = blk1->bp->b_addr;
leaf2 = blk2->bp->b_addr;
dp->d_ops->leaf_hdr_from_disk(&hdr1, leaf1);
@ -1093,11 +1090,11 @@ xfs_dir2_leafn_rebalance(
* Mark whether we're inserting into the old or new leaf.
*/
if (hdr1.count < hdr2.count)
state->inleaf = swap;
state->inleaf = swap_blocks;
else if (hdr1.count > hdr2.count)
state->inleaf = !swap;
state->inleaf = !swap_blocks;
else
state->inleaf = swap ^ (blk1->index <= hdr1.count);
state->inleaf = swap_blocks ^ (blk1->index <= hdr1.count);
/*
* Adjust the expected index for insertion.
*/
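
The rename from 'swap' to 'swap_blocks' lets this function use the kernel's swap() macro from <linux/kernel.h> without shadowing it, replacing the old open-coded three-statement exchange. For reference, the macro expands to roughly:

	#define swap(a, b) \
		do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)
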


@ -53,7 +53,8 @@
#define XFS_ERRTAG_LOG_ITEM_PIN 30
#define XFS_ERRTAG_BUF_LRU_REF 31
#define XFS_ERRTAG_FORCE_SCRUB_REPAIR 32
#define XFS_ERRTAG_MAX 33
#define XFS_ERRTAG_FORCE_SUMMARY_RECALC 33
#define XFS_ERRTAG_MAX 34
/*
* Random factors for above tags, 1 means always, 2 means 1/2 time, etc.
@ -91,5 +92,6 @@
#define XFS_RANDOM_LOG_ITEM_PIN 1
#define XFS_RANDOM_BUF_LRU_REF 2
#define XFS_RANDOM_FORCE_SCRUB_REPAIR 1
#define XFS_RANDOM_FORCE_SUMMARY_RECALC 1
#endif /* __XFS_ERRORTAG_H_ */
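
The new tag gives xfstests a knob for forcing the mount-time summary counter recalculation path. A hedged sketch of a typical injection site (the tag's actual consumer lives elsewhere in the series), assuming the three-argument XFS_TEST_ERROR() form:

	if (XFS_TEST_ERROR(false, mp, XFS_ERRTAG_FORCE_SUMMARY_RECALC))
		xfs_alert(mp, "forcing summary counter recalculation");
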


@ -1838,23 +1838,24 @@ out_error:
*/
STATIC void
xfs_difree_inode_chunk(
struct xfs_mount *mp,
struct xfs_trans *tp,
xfs_agnumber_t agno,
struct xfs_inobt_rec_incore *rec,
struct xfs_defer_ops *dfops)
struct xfs_inobt_rec_incore *rec)
{
xfs_agblock_t sagbno = XFS_AGINO_TO_AGBNO(mp, rec->ir_startino);
int startidx, endidx;
int nextbit;
xfs_agblock_t agbno;
int contigblk;
struct xfs_owner_info oinfo;
struct xfs_mount *mp = tp->t_mountp;
xfs_agblock_t sagbno = XFS_AGINO_TO_AGBNO(mp,
rec->ir_startino);
int startidx, endidx;
int nextbit;
xfs_agblock_t agbno;
int contigblk;
struct xfs_owner_info oinfo;
DECLARE_BITMAP(holemask, XFS_INOBT_HOLEMASK_BITS);
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES);
if (!xfs_inobt_issparse(rec->ir_holemask)) {
/* not sparse, calculate extent info directly */
xfs_bmap_add_free(mp, dfops, XFS_AGB_TO_FSB(mp, agno, sagbno),
xfs_bmap_add_free(tp, XFS_AGB_TO_FSB(mp, agno, sagbno),
mp->m_ialloc_blks, &oinfo);
return;
}
@ -1898,7 +1899,7 @@ xfs_difree_inode_chunk(
ASSERT(agbno % mp->m_sb.sb_spino_align == 0);
ASSERT(contigblk % mp->m_sb.sb_spino_align == 0);
xfs_bmap_add_free(mp, dfops, XFS_AGB_TO_FSB(mp, agno, agbno),
xfs_bmap_add_free(tp, XFS_AGB_TO_FSB(mp, agno, agbno),
contigblk, &oinfo);
/* reset range to current bit and carry on... */
@ -1915,7 +1916,6 @@ xfs_difree_inobt(
struct xfs_trans *tp,
struct xfs_buf *agbp,
xfs_agino_t agino,
struct xfs_defer_ops *dfops,
struct xfs_icluster *xic,
struct xfs_inobt_rec_incore *orec)
{
@ -2003,7 +2003,7 @@ xfs_difree_inobt(
goto error0;
}
xfs_difree_inode_chunk(mp, agno, &rec, dfops);
xfs_difree_inode_chunk(tp, agno, &rec);
} else {
xic->deleted = false;
@ -2148,7 +2148,6 @@ int
xfs_difree(
struct xfs_trans *tp, /* transaction pointer */
xfs_ino_t inode, /* inode to be freed */
struct xfs_defer_ops *dfops, /* extents to free */
struct xfs_icluster *xic) /* cluster info if deleted */
{
/* REFERENCED */
@ -2200,7 +2199,7 @@ xfs_difree(
/*
* Fix up the inode allocation btree.
*/
error = xfs_difree_inobt(mp, tp, agbp, agino, dfops, xic, &rec);
error = xfs_difree_inobt(mp, tp, agbp, agino, xic, &rec);
if (error)
goto error0;
@ -2260,7 +2259,7 @@ xfs_imap_lookup(
}
xfs_trans_brelse(tp, agbp);
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
if (error)
return error;
@ -2539,7 +2538,7 @@ xfs_agi_verify(
return __this_address;
for (i = 0; i < XFS_AGI_UNLINKED_BUCKETS; i++) {
if (agi->agi_unlinked[i] == NULLAGINO)
if (agi->agi_unlinked[i] == cpu_to_be32(NULLAGINO))
continue;
if (!xfs_verify_ino(mp, be32_to_cpu(agi->agi_unlinked[i])))
return __this_address;


@ -82,7 +82,6 @@ int /* error */
xfs_difree(
struct xfs_trans *tp, /* transaction pointer */
xfs_ino_t inode, /* inode to be freed */
struct xfs_defer_ops *dfops, /* extents to free */
struct xfs_icluster *ifree); /* cluster info if deleted */
/*


@ -552,6 +552,7 @@ xfs_inobt_max_size(
static int
xfs_inobt_count_blocks(
struct xfs_mount *mp,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_btnum_t btnum,
xfs_extlen_t *tree_blocks)
@ -560,14 +561,14 @@ xfs_inobt_count_blocks(
struct xfs_btree_cur *cur;
int error;
error = xfs_ialloc_read_agi(mp, NULL, agno, &agbp);
error = xfs_ialloc_read_agi(mp, tp, agno, &agbp);
if (error)
return error;
cur = xfs_inobt_init_cursor(mp, NULL, agbp, agno, btnum);
cur = xfs_inobt_init_cursor(mp, tp, agbp, agno, btnum);
error = xfs_btree_count_blocks(cur, tree_blocks);
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_buf_relse(agbp);
xfs_btree_del_cursor(cur, error);
xfs_trans_brelse(tp, agbp);
return error;
}
@ -578,6 +579,7 @@ xfs_inobt_count_blocks(
int
xfs_finobt_calc_reserves(
struct xfs_mount *mp,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_extlen_t *ask,
xfs_extlen_t *used)
@ -588,7 +590,7 @@ xfs_finobt_calc_reserves(
if (!xfs_sb_version_hasfinobt(&mp->m_sb))
return 0;
error = xfs_inobt_count_blocks(mp, agno, XFS_BTNUM_FINO, &tree_len);
error = xfs_inobt_count_blocks(mp, tp, agno, XFS_BTNUM_FINO, &tree_len);
if (error)
return error;


@ -60,8 +60,8 @@ int xfs_inobt_rec_check_count(struct xfs_mount *,
#define xfs_inobt_rec_check_count(mp, rec) 0
#endif /* DEBUG */
int xfs_finobt_calc_reserves(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_extlen_t *ask, xfs_extlen_t *used);
int xfs_finobt_calc_reserves(struct xfs_mount *mp, struct xfs_trans *tp,
xfs_agnumber_t agno, xfs_extlen_t *ask, xfs_extlen_t *used);
extern xfs_extlen_t xfs_iallocbt_calc_size(struct xfs_mount *mp,
unsigned long long len);


@ -14,6 +14,7 @@
#include "xfs_inode_fork.h"
#include "xfs_trans_resv.h"
#include "xfs_mount.h"
#include "xfs_bmap.h"
#include "xfs_trace.h"
/*
@ -612,6 +613,19 @@ xfs_iext_realloc_root(
cur->leaf = new;
}
/*
* Increment the sequence counter if we are on a COW fork. This allows
* the writeback code to skip looking for a COW extent if the COW fork
* hasn't changed. We use WRITE_ONCE here to ensure the update to the
* sequence counter is seen before the modifications to the extent
* tree itself take effect.
*/
static inline void xfs_iext_inc_seq(struct xfs_ifork *ifp, int state)
{
if (state & BMAP_COWFORK)
WRITE_ONCE(ifp->if_seq, READ_ONCE(ifp->if_seq) + 1);
}
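
On the consumer side the intended pairing looks roughly like the following (an assumed pattern, not a verbatim hunk from this series): writeback samples if_seq once and redoes the COW fork lookup only if a later sample differs.

	unsigned int	seq = READ_ONCE(ifp->if_seq);

	/* ... reuse a previously cached COW fork mapping ... */

	if (READ_ONCE(ifp->if_seq) != seq) {
		/* the COW fork changed underneath us; repeat the lookup */
	}
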
void
xfs_iext_insert(
struct xfs_inode *ip,
@ -624,6 +638,8 @@ xfs_iext_insert(
struct xfs_iext_leaf *new = NULL;
int nr_entries, i;
xfs_iext_inc_seq(ifp, state);
if (ifp->if_height == 0)
xfs_iext_alloc_root(ifp, cur);
else if (ifp->if_height == 1)
@ -864,6 +880,8 @@ xfs_iext_remove(
ASSERT(ifp->if_u1.if_root != NULL);
ASSERT(xfs_iext_valid(ifp, cur));
xfs_iext_inc_seq(ifp, state);
nr_entries = xfs_iext_leaf_nr_entries(ifp, leaf, cur->pos) - 1;
for (i = cur->pos; i < nr_entries; i++)
leaf->recs[i] = leaf->recs[i + 1];
@ -970,6 +988,8 @@ xfs_iext_update_extent(
{
struct xfs_ifork *ifp = xfs_iext_state_to_fork(ip, state);
xfs_iext_inc_seq(ifp, state);
if (cur->pos == 0) {
struct xfs_bmbt_irec old;


@ -158,7 +158,6 @@ xfs_init_local_fork(
}
ifp->if_bytes = size;
ifp->if_real_bytes = real_size;
ifp->if_flags &= ~(XFS_IFEXTENTS | XFS_IFBROOT);
ifp->if_flags |= XFS_IFINLINE;
}
@ -226,7 +225,6 @@ xfs_iformat_extents(
return -EFSCORRUPTED;
}
ifp->if_real_bytes = 0;
ifp->if_bytes = 0;
ifp->if_u1.if_root = NULL;
ifp->if_height = 0;
@ -271,7 +269,7 @@ xfs_iformat_btree(
{
struct xfs_mount *mp = ip->i_mount;
xfs_bmdr_block_t *dfp;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
/* REFERENCED */
int nrecs;
int size;
@ -317,7 +315,6 @@ xfs_iformat_btree(
ifp->if_flags &= ~XFS_IFEXTENTS;
ifp->if_flags |= XFS_IFBROOT;
ifp->if_real_bytes = 0;
ifp->if_bytes = 0;
ifp->if_u1.if_root = NULL;
ifp->if_height = 0;
@ -350,7 +347,7 @@ xfs_iroot_realloc(
{
struct xfs_mount *mp = ip->i_mount;
int cur_max;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
struct xfs_btree_block *new_broot;
int new_max;
size_t new_size;
@ -471,55 +468,34 @@ xfs_iroot_realloc(
*/
void
xfs_idata_realloc(
xfs_inode_t *ip,
int byte_diff,
int whichfork)
struct xfs_inode *ip,
int byte_diff,
int whichfork)
{
xfs_ifork_t *ifp;
int new_size;
int real_size;
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
int new_size = (int)ifp->if_bytes + byte_diff;
if (byte_diff == 0) {
return;
}
ifp = XFS_IFORK_PTR(ip, whichfork);
new_size = (int)ifp->if_bytes + byte_diff;
ASSERT(new_size >= 0);
ASSERT(new_size <= XFS_IFORK_SIZE(ip, whichfork));
if (byte_diff == 0)
return;
if (new_size == 0) {
kmem_free(ifp->if_u1.if_data);
ifp->if_u1.if_data = NULL;
real_size = 0;
} else {
/*
* Stuck with malloc/realloc.
* For inline data, the underlying buffer must be
* a multiple of 4 bytes in size so that it can be
* logged and stay on word boundaries. We enforce
* that here.
*/
real_size = roundup(new_size, 4);
if (ifp->if_u1.if_data == NULL) {
ASSERT(ifp->if_real_bytes == 0);
ifp->if_u1.if_data = kmem_alloc(real_size,
KM_SLEEP | KM_NOFS);
} else {
/*
* Only do the realloc if the underlying size
* is really changing.
*/
if (ifp->if_real_bytes != real_size) {
ifp->if_u1.if_data =
kmem_realloc(ifp->if_u1.if_data,
real_size,
KM_SLEEP | KM_NOFS);
}
}
ifp->if_bytes = 0;
return;
}
ifp->if_real_bytes = real_size;
/*
* For inline data, the underlying buffer must be a multiple of 4 bytes
* in size so that it can be logged and stay on word boundaries.
* We enforce that here.
*/
ifp->if_u1.if_data = kmem_realloc(ifp->if_u1.if_data,
roundup(new_size, 4), KM_SLEEP | KM_NOFS);
ifp->if_bytes = new_size;
ASSERT(ifp->if_bytes <= XFS_IFORK_SIZE(ip, whichfork));
}
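
The slimmed-down body leans on realloc semantics: the first allocation works because kmem_realloc() follows krealloc(), where a NULL pointer behaves like a fresh allocation (an assumption this sketch makes explicit):

	char	*data = NULL;

	/* acts as an allocation on the first call, a resize afterwards */
	data = kmem_realloc(data, roundup(new_size, 4), KM_SLEEP | KM_NOFS);
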
void
@ -527,7 +503,7 @@ xfs_idestroy_fork(
xfs_inode_t *ip,
int whichfork)
{
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
ifp = XFS_IFORK_PTR(ip, whichfork);
if (ifp->if_broot != NULL) {
@ -543,17 +519,13 @@ xfs_idestroy_fork(
*/
if (XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL) {
if (ifp->if_u1.if_data != NULL) {
ASSERT(ifp->if_real_bytes != 0);
kmem_free(ifp->if_u1.if_data);
ifp->if_u1.if_data = NULL;
ifp->if_real_bytes = 0;
}
} else if ((ifp->if_flags & XFS_IFEXTENTS) && ifp->if_height) {
xfs_iext_destroy(ifp);
}
ASSERT(ifp->if_real_bytes == 0);
if (whichfork == XFS_ATTR_FORK) {
kmem_zone_free(xfs_ifork_zone, ip->i_afp);
ip->i_afp = NULL;
@ -620,7 +592,7 @@ xfs_iflush_fork(
int whichfork)
{
char *cp;
xfs_ifork_t *ifp;
struct xfs_ifork *ifp;
xfs_mount_t *mp;
static const short brootflag[2] =
{ XFS_ILOG_DBROOT, XFS_ILOG_ABROOT };


@ -12,9 +12,9 @@ struct xfs_dinode;
/*
* File incore extent information, present for each of data & attr forks.
*/
typedef struct xfs_ifork {
struct xfs_ifork {
int if_bytes; /* bytes in if_u1 */
int if_real_bytes; /* bytes allocated in if_u1 */
unsigned int if_seq; /* cow fork mod counter */
struct xfs_btree_block *if_broot; /* file's incore btree root */
short if_broot_bytes; /* bytes allocated for root */
unsigned char if_flags; /* per-fork flags */
@ -23,7 +23,7 @@ typedef struct xfs_ifork {
void *if_root; /* extent tree root */
char *if_data; /* inline file data */
} if_u1;
} xfs_ifork_t;
};
/*
* Per-fork incore inode flags.


@ -77,6 +77,19 @@ static inline uint xlog_get_cycle(char *ptr)
#define XLOG_UNMOUNT_TYPE 0x556e /* Un for Unmount */
/*
* Log item for unmount records.
*
* The unmount record used to have a string "Unmount filesystem--" in the
* data section where the "Un" was really a magic number (XLOG_UNMOUNT_TYPE).
* We just write the magic number now; see xfs_log_unmount_write.
*/
struct xfs_unmount_log_format {
uint16_t magic; /* XLOG_UNMOUNT_TYPE */
uint16_t pad1;
uint32_t pad2; /* may as well make it 64 bits */
};
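
This pins the 8-byte on-disk layout of the unmount record that used to be built from the string's leading magic. A compile-time check along these lines (hypothetical, not part of the patch) would lock the size in:

	BUILD_BUG_ON(sizeof(struct xfs_unmount_log_format) != 8);
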
/* Region types for iovec's i_type */
#define XLOG_REG_TYPE_BFORMAT 1
#define XLOG_REG_TYPE_BCHUNK 2


@ -34,11 +34,9 @@ enum xfs_refc_adjust_op {
};
STATIC int __xfs_refcount_cow_alloc(struct xfs_btree_cur *rcur,
xfs_agblock_t agbno, xfs_extlen_t aglen,
struct xfs_defer_ops *dfops);
xfs_agblock_t agbno, xfs_extlen_t aglen);
STATIC int __xfs_refcount_cow_free(struct xfs_btree_cur *rcur,
xfs_agblock_t agbno, xfs_extlen_t aglen,
struct xfs_defer_ops *dfops);
xfs_agblock_t agbno, xfs_extlen_t aglen);
/*
* Look up the first record less than or equal to [bno, len] in the btree
@ -870,7 +868,6 @@ xfs_refcount_adjust_extents(
xfs_agblock_t *agbno,
xfs_extlen_t *aglen,
enum xfs_refc_adjust_op adj,
struct xfs_defer_ops *dfops,
struct xfs_owner_info *oinfo)
{
struct xfs_refcount_irec ext, tmp;
@ -925,8 +922,8 @@ xfs_refcount_adjust_extents(
fsbno = XFS_AGB_TO_FSB(cur->bc_mp,
cur->bc_private.a.agno,
tmp.rc_startblock);
xfs_bmap_add_free(cur->bc_mp, dfops, fsbno,
tmp.rc_blockcount, oinfo);
xfs_bmap_add_free(cur->bc_tp, fsbno,
tmp.rc_blockcount, oinfo);
}
(*agbno) += tmp.rc_blockcount;
@ -968,8 +965,8 @@ xfs_refcount_adjust_extents(
fsbno = XFS_AGB_TO_FSB(cur->bc_mp,
cur->bc_private.a.agno,
ext.rc_startblock);
xfs_bmap_add_free(cur->bc_mp, dfops, fsbno,
ext.rc_blockcount, oinfo);
xfs_bmap_add_free(cur->bc_tp, fsbno, ext.rc_blockcount,
oinfo);
}
skip:
@ -998,7 +995,6 @@ xfs_refcount_adjust(
xfs_agblock_t *new_agbno,
xfs_extlen_t *new_aglen,
enum xfs_refc_adjust_op adj,
struct xfs_defer_ops *dfops,
struct xfs_owner_info *oinfo)
{
bool shape_changed;
@ -1043,7 +1039,7 @@ xfs_refcount_adjust(
/* Now that we've taken care of the ends, adjust the middle extents */
error = xfs_refcount_adjust_extents(cur, new_agbno, new_aglen,
adj, dfops, oinfo);
adj, oinfo);
if (error)
goto out_error;
@ -1067,7 +1063,7 @@ xfs_refcount_finish_one_cleanup(
if (rcur == NULL)
return;
agbp = rcur->bc_private.a.agbp;
xfs_btree_del_cursor(rcur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(rcur, error);
if (error)
xfs_trans_brelse(tp, agbp);
}
@ -1082,7 +1078,6 @@ xfs_refcount_finish_one_cleanup(
int
xfs_refcount_finish_one(
struct xfs_trans *tp,
struct xfs_defer_ops *dfops,
enum xfs_refcount_intent_type type,
xfs_fsblock_t startblock,
xfs_extlen_t blockcount,
@ -1132,7 +1127,7 @@ xfs_refcount_finish_one(
if (!agbp)
return -EFSCORRUPTED;
rcur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno, dfops);
rcur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno);
if (!rcur) {
error = -ENOMEM;
goto out_cur;
@ -1145,23 +1140,23 @@ xfs_refcount_finish_one(
switch (type) {
case XFS_REFCOUNT_INCREASE:
error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
new_len, XFS_REFCOUNT_ADJUST_INCREASE, dfops, NULL);
new_len, XFS_REFCOUNT_ADJUST_INCREASE, NULL);
*new_fsb = XFS_AGB_TO_FSB(mp, agno, new_agbno);
break;
case XFS_REFCOUNT_DECREASE:
error = xfs_refcount_adjust(rcur, bno, blockcount, &new_agbno,
new_len, XFS_REFCOUNT_ADJUST_DECREASE, dfops, NULL);
new_len, XFS_REFCOUNT_ADJUST_DECREASE, NULL);
*new_fsb = XFS_AGB_TO_FSB(mp, agno, new_agbno);
break;
case XFS_REFCOUNT_ALLOC_COW:
*new_fsb = startblock + blockcount;
*new_len = 0;
error = __xfs_refcount_cow_alloc(rcur, bno, blockcount, dfops);
error = __xfs_refcount_cow_alloc(rcur, bno, blockcount);
break;
case XFS_REFCOUNT_FREE_COW:
*new_fsb = startblock + blockcount;
*new_len = 0;
error = __xfs_refcount_cow_free(rcur, bno, blockcount, dfops);
error = __xfs_refcount_cow_free(rcur, bno, blockcount);
break;
default:
ASSERT(0);
@ -1183,16 +1178,16 @@ out_cur:
*/
static int
__xfs_refcount_add(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
enum xfs_refcount_intent_type type,
xfs_fsblock_t startblock,
xfs_extlen_t blockcount)
{
struct xfs_refcount_intent *ri;
trace_xfs_refcount_defer(mp, XFS_FSB_TO_AGNO(mp, startblock),
type, XFS_FSB_TO_AGBNO(mp, startblock),
trace_xfs_refcount_defer(tp->t_mountp,
XFS_FSB_TO_AGNO(tp->t_mountp, startblock),
type, XFS_FSB_TO_AGBNO(tp->t_mountp, startblock),
blockcount);
ri = kmem_alloc(sizeof(struct xfs_refcount_intent),
@ -1202,7 +1197,7 @@ __xfs_refcount_add(
ri->ri_startblock = startblock;
ri->ri_blockcount = blockcount;
xfs_defer_add(dfops, XFS_DEFER_OPS_TYPE_REFCOUNT, &ri->ri_list);
xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_REFCOUNT, &ri->ri_list);
return 0;
}
@ -1211,14 +1206,13 @@ __xfs_refcount_add(
*/
int
xfs_refcount_increase_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
struct xfs_bmbt_irec *PREV)
{
if (!xfs_sb_version_hasreflink(&mp->m_sb))
if (!xfs_sb_version_hasreflink(&tp->t_mountp->m_sb))
return 0;
return __xfs_refcount_add(mp, dfops, XFS_REFCOUNT_INCREASE,
return __xfs_refcount_add(tp, XFS_REFCOUNT_INCREASE,
PREV->br_startblock, PREV->br_blockcount);
}
@ -1227,14 +1221,13 @@ xfs_refcount_increase_extent(
*/
int
xfs_refcount_decrease_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
struct xfs_bmbt_irec *PREV)
{
if (!xfs_sb_version_hasreflink(&mp->m_sb))
if (!xfs_sb_version_hasreflink(&tp->t_mountp->m_sb))
return 0;
return __xfs_refcount_add(mp, dfops, XFS_REFCOUNT_DECREASE,
return __xfs_refcount_add(tp, XFS_REFCOUNT_DECREASE,
PREV->br_startblock, PREV->br_blockcount);
}
@ -1522,8 +1515,7 @@ STATIC int
__xfs_refcount_cow_alloc(
struct xfs_btree_cur *rcur,
xfs_agblock_t agbno,
xfs_extlen_t aglen,
struct xfs_defer_ops *dfops)
xfs_extlen_t aglen)
{
trace_xfs_refcount_cow_increase(rcur->bc_mp, rcur->bc_private.a.agno,
agbno, aglen);
@ -1540,8 +1532,7 @@ STATIC int
__xfs_refcount_cow_free(
struct xfs_btree_cur *rcur,
xfs_agblock_t agbno,
xfs_extlen_t aglen,
struct xfs_defer_ops *dfops)
xfs_extlen_t aglen)
{
trace_xfs_refcount_cow_decrease(rcur->bc_mp, rcur->bc_private.a.agno,
agbno, aglen);
@ -1554,47 +1545,45 @@ __xfs_refcount_cow_free(
/* Record a CoW staging extent in the refcount btree. */
int
xfs_refcount_alloc_cow_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_fsblock_t fsb,
xfs_extlen_t len)
{
struct xfs_mount *mp = tp->t_mountp;
int error;
if (!xfs_sb_version_hasreflink(&mp->m_sb))
return 0;
error = __xfs_refcount_add(mp, dfops, XFS_REFCOUNT_ALLOC_COW,
fsb, len);
error = __xfs_refcount_add(tp, XFS_REFCOUNT_ALLOC_COW, fsb, len);
if (error)
return error;
/* Add rmap entry */
return xfs_rmap_alloc_extent(mp, dfops, XFS_FSB_TO_AGNO(mp, fsb),
return xfs_rmap_alloc_extent(tp, XFS_FSB_TO_AGNO(mp, fsb),
XFS_FSB_TO_AGBNO(mp, fsb), len, XFS_RMAP_OWN_COW);
}
/* Forget a CoW staging event in the refcount btree. */
int
xfs_refcount_free_cow_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_fsblock_t fsb,
xfs_extlen_t len)
{
struct xfs_mount *mp = tp->t_mountp;
int error;
if (!xfs_sb_version_hasreflink(&mp->m_sb))
return 0;
/* Remove rmap entry */
error = xfs_rmap_free_extent(mp, dfops, XFS_FSB_TO_AGNO(mp, fsb),
error = xfs_rmap_free_extent(tp, XFS_FSB_TO_AGNO(mp, fsb),
XFS_FSB_TO_AGBNO(mp, fsb), len, XFS_RMAP_OWN_COW);
if (error)
return error;
return __xfs_refcount_add(mp, dfops, XFS_REFCOUNT_FREE_COW,
fsb, len);
return __xfs_refcount_add(tp, XFS_REFCOUNT_FREE_COW, fsb, len);
}
struct xfs_refcount_recovery {
@ -1635,7 +1624,6 @@ xfs_refcount_recover_cow_leftovers(
struct list_head debris;
union xfs_btree_irec low;
union xfs_btree_irec high;
struct xfs_defer_ops dfops;
xfs_fsblock_t fsb;
xfs_agblock_t agbno;
int error;
@ -1666,7 +1654,7 @@ xfs_refcount_recover_cow_leftovers(
error = -ENOMEM;
goto out_trans;
}
cur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno, NULL);
cur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno);
/* Find all the leftover CoW staging extents. */
memset(&low, 0, sizeof(low));
@ -1675,11 +1663,11 @@ xfs_refcount_recover_cow_leftovers(
high.rc.rc_startblock = -1U;
error = xfs_btree_query_range(cur, &low, &high,
xfs_refcount_recover_extent, &debris);
if (error)
goto out_cursor;
xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
xfs_trans_brelse(tp, agbp);
xfs_trans_cancel(tp);
if (error)
goto out_free;
/* Now iterate the list to free the leftovers */
list_for_each_entry_safe(rr, n, &debris, rr_list) {
@ -1691,21 +1679,15 @@ xfs_refcount_recover_cow_leftovers(
trace_xfs_refcount_recover_extent(mp, agno, &rr->rr_rrec);
/* Free the orphan record */
xfs_defer_init(&dfops, &fsb);
agbno = rr->rr_rrec.rc_startblock - XFS_REFC_COW_START;
fsb = XFS_AGB_TO_FSB(mp, agno, agbno);
error = xfs_refcount_free_cow_extent(mp, &dfops, fsb,
error = xfs_refcount_free_cow_extent(tp, fsb,
rr->rr_rrec.rc_blockcount);
if (error)
goto out_defer;
goto out_trans;
/* Free the block. */
xfs_bmap_add_free(mp, &dfops, fsb,
rr->rr_rrec.rc_blockcount, NULL);
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_defer;
xfs_bmap_add_free(tp, fsb, rr->rr_rrec.rc_blockcount, NULL);
error = xfs_trans_commit(tp);
if (error)
@ -1716,8 +1698,6 @@ xfs_refcount_recover_cow_leftovers(
}
return error;
out_defer:
xfs_defer_cancel(&dfops);
out_trans:
xfs_trans_cancel(tp);
out_free:
@ -1727,11 +1707,6 @@ out_free:
kmem_free(rr);
}
return error;
out_cursor:
xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
xfs_trans_brelse(tp, agbp);
goto out_trans;
}
/* Is there a record covering a given extent? */


@ -29,29 +29,26 @@ struct xfs_refcount_intent {
xfs_extlen_t ri_blockcount;
};
extern int xfs_refcount_increase_extent(struct xfs_mount *mp,
struct xfs_defer_ops *dfops, struct xfs_bmbt_irec *irec);
extern int xfs_refcount_decrease_extent(struct xfs_mount *mp,
struct xfs_defer_ops *dfops, struct xfs_bmbt_irec *irec);
extern int xfs_refcount_increase_extent(struct xfs_trans *tp,
struct xfs_bmbt_irec *irec);
extern int xfs_refcount_decrease_extent(struct xfs_trans *tp,
struct xfs_bmbt_irec *irec);
extern void xfs_refcount_finish_one_cleanup(struct xfs_trans *tp,
struct xfs_btree_cur *rcur, int error);
extern int xfs_refcount_finish_one(struct xfs_trans *tp,
struct xfs_defer_ops *dfops, enum xfs_refcount_intent_type type,
xfs_fsblock_t startblock, xfs_extlen_t blockcount,
xfs_fsblock_t *new_fsb, xfs_extlen_t *new_len,
struct xfs_btree_cur **pcur);
enum xfs_refcount_intent_type type, xfs_fsblock_t startblock,
xfs_extlen_t blockcount, xfs_fsblock_t *new_fsb,
xfs_extlen_t *new_len, struct xfs_btree_cur **pcur);
extern int xfs_refcount_find_shared(struct xfs_btree_cur *cur,
xfs_agblock_t agbno, xfs_extlen_t aglen, xfs_agblock_t *fbno,
xfs_extlen_t *flen, bool find_end_of_shared);
extern int xfs_refcount_alloc_cow_extent(struct xfs_mount *mp,
struct xfs_defer_ops *dfops, xfs_fsblock_t fsb,
xfs_extlen_t len);
extern int xfs_refcount_free_cow_extent(struct xfs_mount *mp,
struct xfs_defer_ops *dfops, xfs_fsblock_t fsb,
xfs_extlen_t len);
extern int xfs_refcount_alloc_cow_extent(struct xfs_trans *tp,
xfs_fsblock_t fsb, xfs_extlen_t len);
extern int xfs_refcount_free_cow_extent(struct xfs_trans *tp,
xfs_fsblock_t fsb, xfs_extlen_t len);
extern int xfs_refcount_recover_cow_leftovers(struct xfs_mount *mp,
xfs_agnumber_t agno);


@ -27,8 +27,7 @@ xfs_refcountbt_dup_cursor(
struct xfs_btree_cur *cur)
{
return xfs_refcountbt_init_cursor(cur->bc_mp, cur->bc_tp,
cur->bc_private.a.agbp, cur->bc_private.a.agno,
cur->bc_private.a.dfops);
cur->bc_private.a.agbp, cur->bc_private.a.agno);
}
STATIC void
@ -71,7 +70,6 @@ xfs_refcountbt_alloc_block(
args.type = XFS_ALLOCTYPE_NEAR_BNO;
args.fsbno = XFS_AGB_TO_FSB(cur->bc_mp, cur->bc_private.a.agno,
xfs_refc_block(args.mp));
args.firstblock = args.fsbno;
xfs_rmap_ag_owner(&args.oinfo, XFS_RMAP_OWN_REFC);
args.minlen = args.maxlen = args.prod = 1;
args.resv = XFS_AG_RESV_METADATA;
@ -323,8 +321,7 @@ xfs_refcountbt_init_cursor(
struct xfs_mount *mp,
struct xfs_trans *tp,
struct xfs_buf *agbp,
xfs_agnumber_t agno,
struct xfs_defer_ops *dfops)
xfs_agnumber_t agno)
{
struct xfs_agf *agf = XFS_BUF_TO_AGF(agbp);
struct xfs_btree_cur *cur;
@ -344,7 +341,6 @@ xfs_refcountbt_init_cursor(
cur->bc_private.a.agbp = agbp;
cur->bc_private.a.agno = agno;
cur->bc_private.a.dfops = dfops;
cur->bc_flags |= XFS_BTREE_CRC_BLOCKS;
cur->bc_private.a.priv.refc.nr_ops = 0;
@ -408,6 +404,7 @@ xfs_refcountbt_max_size(
int
xfs_refcountbt_calc_reserves(
struct xfs_mount *mp,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_extlen_t *ask,
xfs_extlen_t *used)
@ -422,14 +419,14 @@ xfs_refcountbt_calc_reserves(
return 0;
error = xfs_alloc_read_agf(mp, NULL, agno, 0, &agbp);
error = xfs_alloc_read_agf(mp, tp, agno, 0, &agbp);
if (error)
return error;
agf = XFS_BUF_TO_AGF(agbp);
agblocks = be32_to_cpu(agf->agf_length);
tree_len = be32_to_cpu(agf->agf_refcount_blocks);
xfs_buf_relse(agbp);
xfs_trans_brelse(tp, agbp);
*ask += xfs_refcountbt_max_size(mp, agblocks);
*used += tree_len;


@ -44,8 +44,8 @@ struct xfs_mount;
((index) - 1) * sizeof(xfs_refcount_ptr_t)))
extern struct xfs_btree_cur *xfs_refcountbt_init_cursor(struct xfs_mount *mp,
struct xfs_trans *tp, struct xfs_buf *agbp, xfs_agnumber_t agno,
struct xfs_defer_ops *dfops);
struct xfs_trans *tp, struct xfs_buf *agbp,
xfs_agnumber_t agno);
extern int xfs_refcountbt_maxrecs(int blocklen, bool leaf);
extern void xfs_refcountbt_compute_maxlevels(struct xfs_mount *mp);
@ -55,6 +55,7 @@ extern xfs_extlen_t xfs_refcountbt_max_size(struct xfs_mount *mp,
xfs_agblock_t agblocks);
extern int xfs_refcountbt_calc_reserves(struct xfs_mount *mp,
xfs_agnumber_t agno, xfs_extlen_t *ask, xfs_extlen_t *used);
struct xfs_trans *tp, xfs_agnumber_t agno, xfs_extlen_t *ask,
xfs_extlen_t *used);
#endif /* __XFS_REFCOUNT_BTREE_H__ */


@ -670,14 +670,8 @@ xfs_rmap_free(
cur = xfs_rmapbt_init_cursor(mp, tp, agbp, agno);
error = xfs_rmap_unmap(cur, bno, len, false, oinfo);
if (error)
goto out_error;
xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
return 0;
out_error:
xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
xfs_btree_del_cursor(cur, error);
return error;
}
@ -753,19 +747,19 @@ xfs_rmap_map(
&have_lt);
if (error)
goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_lt == 1, out_error);
if (have_lt) {
error = xfs_rmap_get_rec(cur, &ltrec, &have_lt);
if (error)
goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_lt == 1, out_error);
trace_xfs_rmap_lookup_le_range_result(cur->bc_mp,
cur->bc_private.a.agno, ltrec.rm_startblock,
ltrec.rm_blockcount, ltrec.rm_owner,
ltrec.rm_offset, ltrec.rm_flags);
error = xfs_rmap_get_rec(cur, &ltrec, &have_lt);
if (error)
goto out_error;
XFS_WANT_CORRUPTED_GOTO(mp, have_lt == 1, out_error);
trace_xfs_rmap_lookup_le_range_result(cur->bc_mp,
cur->bc_private.a.agno, ltrec.rm_startblock,
ltrec.rm_blockcount, ltrec.rm_owner,
ltrec.rm_offset, ltrec.rm_flags);
if (!xfs_rmap_is_mergeable(&ltrec, owner, flags))
have_lt = 0;
if (!xfs_rmap_is_mergeable(&ltrec, owner, flags))
have_lt = 0;
}
XFS_WANT_CORRUPTED_GOTO(mp,
have_lt == 0 ||
@ -912,14 +906,8 @@ xfs_rmap_alloc(
cur = xfs_rmapbt_init_cursor(mp, tp, agbp, agno);
error = xfs_rmap_map(cur, bno, len, false, oinfo);
if (error)
goto out_error;
xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
return 0;
out_error:
xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
xfs_btree_del_cursor(cur, error);
return error;
}
@ -2156,7 +2144,7 @@ xfs_rmap_finish_one_cleanup(
if (rcur == NULL)
return;
agbp = rcur->bc_private.a.agbp;
xfs_btree_del_cursor(rcur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(rcur, error);
if (error)
xfs_trans_brelse(tp, agbp);
}
@ -2289,18 +2277,18 @@ xfs_rmap_update_is_needed(
*/
static int
__xfs_rmap_add(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
enum xfs_rmap_intent_type type,
uint64_t owner,
int whichfork,
struct xfs_bmbt_irec *bmap)
{
struct xfs_rmap_intent *ri;
struct xfs_rmap_intent *ri;
trace_xfs_rmap_defer(mp, XFS_FSB_TO_AGNO(mp, bmap->br_startblock),
trace_xfs_rmap_defer(tp->t_mountp,
XFS_FSB_TO_AGNO(tp->t_mountp, bmap->br_startblock),
type,
XFS_FSB_TO_AGBNO(mp, bmap->br_startblock),
XFS_FSB_TO_AGBNO(tp->t_mountp, bmap->br_startblock),
owner, whichfork,
bmap->br_startoff,
bmap->br_blockcount,
@ -2313,23 +2301,22 @@ __xfs_rmap_add(
ri->ri_whichfork = whichfork;
ri->ri_bmap = *bmap;
xfs_defer_add(dfops, XFS_DEFER_OPS_TYPE_RMAP, &ri->ri_list);
xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_RMAP, &ri->ri_list);
return 0;
}
/* Map an extent into a file. */
int
xfs_rmap_map_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
struct xfs_inode *ip,
int whichfork,
struct xfs_bmbt_irec *PREV)
{
if (!xfs_rmap_update_is_needed(mp, whichfork))
if (!xfs_rmap_update_is_needed(tp->t_mountp, whichfork))
return 0;
return __xfs_rmap_add(mp, dfops, xfs_is_reflink_inode(ip) ?
return __xfs_rmap_add(tp, xfs_is_reflink_inode(ip) ?
XFS_RMAP_MAP_SHARED : XFS_RMAP_MAP, ip->i_ino,
whichfork, PREV);
}
@ -2337,25 +2324,29 @@ xfs_rmap_map_extent(
/* Unmap an extent out of a file. */
int
xfs_rmap_unmap_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
struct xfs_inode *ip,
int whichfork,
struct xfs_bmbt_irec *PREV)
{
if (!xfs_rmap_update_is_needed(mp, whichfork))
if (!xfs_rmap_update_is_needed(tp->t_mountp, whichfork))
return 0;
return __xfs_rmap_add(mp, dfops, xfs_is_reflink_inode(ip) ?
return __xfs_rmap_add(tp, xfs_is_reflink_inode(ip) ?
XFS_RMAP_UNMAP_SHARED : XFS_RMAP_UNMAP, ip->i_ino,
whichfork, PREV);
}
/* Convert a data fork extent from unwritten to real or vice versa. */
/*
* Convert a data fork extent from unwritten to real or vice versa.
*
* Note that tp can be NULL here as no transaction is used for COW fork
* unwritten conversion.
*/
int
xfs_rmap_convert_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
struct xfs_inode *ip,
int whichfork,
struct xfs_bmbt_irec *PREV)
@ -2363,7 +2354,7 @@ xfs_rmap_convert_extent(
if (!xfs_rmap_update_is_needed(mp, whichfork))
return 0;
return __xfs_rmap_add(mp, dfops, xfs_is_reflink_inode(ip) ?
return __xfs_rmap_add(tp, xfs_is_reflink_inode(ip) ?
XFS_RMAP_CONVERT_SHARED : XFS_RMAP_CONVERT, ip->i_ino,
whichfork, PREV);
}
@ -2371,8 +2362,7 @@ xfs_rmap_convert_extent(
/* Schedule the creation of an rmap for non-file data. */
int
xfs_rmap_alloc_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_agblock_t bno,
xfs_extlen_t len,
@ -2380,23 +2370,21 @@ xfs_rmap_alloc_extent(
{
struct xfs_bmbt_irec bmap;
if (!xfs_rmap_update_is_needed(mp, XFS_DATA_FORK))
if (!xfs_rmap_update_is_needed(tp->t_mountp, XFS_DATA_FORK))
return 0;
bmap.br_startblock = XFS_AGB_TO_FSB(mp, agno, bno);
bmap.br_startblock = XFS_AGB_TO_FSB(tp->t_mountp, agno, bno);
bmap.br_blockcount = len;
bmap.br_startoff = 0;
bmap.br_state = XFS_EXT_NORM;
return __xfs_rmap_add(mp, dfops, XFS_RMAP_ALLOC, owner,
XFS_DATA_FORK, &bmap);
return __xfs_rmap_add(tp, XFS_RMAP_ALLOC, owner, XFS_DATA_FORK, &bmap);
}
/* Schedule the deletion of an rmap for non-file data. */
int
xfs_rmap_free_extent(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_agblock_t bno,
xfs_extlen_t len,
@ -2404,16 +2392,15 @@ xfs_rmap_free_extent(
{
struct xfs_bmbt_irec bmap;
if (!xfs_rmap_update_is_needed(mp, XFS_DATA_FORK))
if (!xfs_rmap_update_is_needed(tp->t_mountp, XFS_DATA_FORK))
return 0;
bmap.br_startblock = XFS_AGB_TO_FSB(mp, agno, bno);
bmap.br_startblock = XFS_AGB_TO_FSB(tp->t_mountp, agno, bno);
bmap.br_blockcount = len;
bmap.br_startoff = 0;
bmap.br_state = XFS_EXT_NORM;
return __xfs_rmap_add(mp, dfops, XFS_RMAP_FREE, owner,
XFS_DATA_FORK, &bmap);
return __xfs_rmap_add(tp, XFS_RMAP_FREE, owner, XFS_DATA_FORK, &bmap);
}
/* Compare rmap records. Returns -1 if a < b, 1 if a > b, and 0 if equal. */


@ -185,21 +185,17 @@ struct xfs_rmap_intent {
};
/* functions for updating the rmapbt based on bmbt map/unmap operations */
int xfs_rmap_map_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
int xfs_rmap_map_extent(struct xfs_trans *tp, struct xfs_inode *ip,
int whichfork, struct xfs_bmbt_irec *imap);
int xfs_rmap_unmap_extent(struct xfs_trans *tp, struct xfs_inode *ip,
int whichfork, struct xfs_bmbt_irec *imap);
int xfs_rmap_convert_extent(struct xfs_mount *mp, struct xfs_trans *tp,
struct xfs_inode *ip, int whichfork,
struct xfs_bmbt_irec *imap);
int xfs_rmap_unmap_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
struct xfs_inode *ip, int whichfork,
struct xfs_bmbt_irec *imap);
int xfs_rmap_convert_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
struct xfs_inode *ip, int whichfork,
struct xfs_bmbt_irec *imap);
int xfs_rmap_alloc_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
xfs_agnumber_t agno, xfs_agblock_t bno, xfs_extlen_t len,
uint64_t owner);
int xfs_rmap_free_extent(struct xfs_mount *mp, struct xfs_defer_ops *dfops,
xfs_agnumber_t agno, xfs_agblock_t bno, xfs_extlen_t len,
uint64_t owner);
int xfs_rmap_alloc_extent(struct xfs_trans *tp, xfs_agnumber_t agno,
xfs_agblock_t bno, xfs_extlen_t len, uint64_t owner);
int xfs_rmap_free_extent(struct xfs_trans *tp, xfs_agnumber_t agno,
xfs_agblock_t bno, xfs_extlen_t len, uint64_t owner);
void xfs_rmap_finish_one_cleanup(struct xfs_trans *tp,
struct xfs_btree_cur *rcur, int error);


@ -554,6 +554,7 @@ xfs_rmapbt_max_size(
int
xfs_rmapbt_calc_reserves(
struct xfs_mount *mp,
struct xfs_trans *tp,
xfs_agnumber_t agno,
xfs_extlen_t *ask,
xfs_extlen_t *used)
@ -567,14 +568,14 @@ xfs_rmapbt_calc_reserves(
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
return 0;
error = xfs_alloc_read_agf(mp, NULL, agno, 0, &agbp);
error = xfs_alloc_read_agf(mp, tp, agno, 0, &agbp);
if (error)
return error;
agf = XFS_BUF_TO_AGF(agbp);
agblocks = be32_to_cpu(agf->agf_length);
tree_len = be32_to_cpu(agf->agf_rmap_blocks);
xfs_buf_relse(agbp);
xfs_trans_brelse(tp, agbp);
/* Reserve 1% of the AG or enough for 1 block per record. */
*ask += max(agblocks / 100, xfs_rmapbt_max_size(mp, agblocks));


@ -51,7 +51,7 @@ extern xfs_extlen_t xfs_rmapbt_calc_size(struct xfs_mount *mp,
extern xfs_extlen_t xfs_rmapbt_max_size(struct xfs_mount *mp,
xfs_agblock_t agblocks);
extern int xfs_rmapbt_calc_reserves(struct xfs_mount *mp,
extern int xfs_rmapbt_calc_reserves(struct xfs_mount *mp, struct xfs_trans *tp,
xfs_agnumber_t agno, xfs_extlen_t *ask, xfs_extlen_t *used);
#endif /* __XFS_RMAP_BTREE_H__ */


@ -96,82 +96,148 @@ xfs_perag_put(
trace_xfs_perag_put(pag->pag_mount, pag->pag_agno, ref, _RET_IP_);
}
/*
* Check the validity of the SB found.
*/
/* Check all the superblock fields we care about when reading one in. */
STATIC int
xfs_mount_validate_sb(
xfs_mount_t *mp,
xfs_sb_t *sbp,
bool check_inprogress,
bool check_version)
xfs_validate_sb_read(
struct xfs_mount *mp,
struct xfs_sb *sbp)
{
uint32_t agcount = 0;
uint32_t rem;
if (XFS_SB_VERSION_NUM(sbp) != XFS_SB_VERSION_5)
return 0;
/*
* Version 5 superblock feature mask validation. Reject combinations
* the kernel cannot support up front before checking anything else.
*/
if (xfs_sb_has_compat_feature(sbp, XFS_SB_FEAT_COMPAT_UNKNOWN)) {
xfs_warn(mp,
"Superblock has unknown compatible features (0x%x) enabled.",
(sbp->sb_features_compat & XFS_SB_FEAT_COMPAT_UNKNOWN));
xfs_warn(mp,
"Using a more recent kernel is recommended.");
}
if (xfs_sb_has_ro_compat_feature(sbp, XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
xfs_alert(mp,
"Superblock has unknown read-only compatible features (0x%x) enabled.",
(sbp->sb_features_ro_compat &
XFS_SB_FEAT_RO_COMPAT_UNKNOWN));
if (!(mp->m_flags & XFS_MOUNT_RDONLY)) {
xfs_warn(mp,
"Attempted to mount read-only compatible filesystem read-write.");
xfs_warn(mp,
"Filesystem can only be safely mounted read only.");
return -EINVAL;
}
}
if (xfs_sb_has_incompat_feature(sbp, XFS_SB_FEAT_INCOMPAT_UNKNOWN)) {
xfs_warn(mp,
"Superblock has unknown incompatible features (0x%x) enabled.",
(sbp->sb_features_incompat &
XFS_SB_FEAT_INCOMPAT_UNKNOWN));
xfs_warn(mp,
"Filesystem cannot be safely mounted by this kernel.");
return -EINVAL;
}
return 0;
}
/* Check all the superblock fields we care about when writing one out. */
STATIC int
xfs_validate_sb_write(
struct xfs_mount *mp,
struct xfs_buf *bp,
struct xfs_sb *sbp)
{
/*
* Carry out additional sb summary counter sanity checks when we write
* the superblock. We skip this in the read validator because there
* could be newer superblocks in the log and if the values are garbage
* even after replay we'll recalculate them at the end of log mount.
*
* mkfs has traditionally written zeroed counters to inprogress and
* secondary superblocks, so allow this usage to continue because
* we never read counters from such superblocks.
*/
if (XFS_BUF_ADDR(bp) == XFS_SB_DADDR && !sbp->sb_inprogress &&
(sbp->sb_fdblocks > sbp->sb_dblocks ||
!xfs_verify_icount(mp, sbp->sb_icount) ||
sbp->sb_ifree > sbp->sb_icount)) {
xfs_warn(mp, "SB summary counter sanity check failed");
return -EFSCORRUPTED;
}
if (XFS_SB_VERSION_NUM(sbp) != XFS_SB_VERSION_5)
return 0;
/*
* Version 5 superblock feature mask validation. Reject combinations
* the kernel cannot support since we checked for unsupported bits in
* the read verifier, which means that memory is corrupt.
*/
if (xfs_sb_has_compat_feature(sbp, XFS_SB_FEAT_COMPAT_UNKNOWN)) {
xfs_warn(mp,
"Corruption detected in superblock compatible features (0x%x)!",
(sbp->sb_features_compat & XFS_SB_FEAT_COMPAT_UNKNOWN));
return -EFSCORRUPTED;
}
if (xfs_sb_has_ro_compat_feature(sbp, XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
xfs_alert(mp,
"Corruption detected in superblock read-only compatible features (0x%x)!",
(sbp->sb_features_ro_compat &
XFS_SB_FEAT_RO_COMPAT_UNKNOWN));
return -EFSCORRUPTED;
}
if (xfs_sb_has_incompat_feature(sbp, XFS_SB_FEAT_INCOMPAT_UNKNOWN)) {
xfs_warn(mp,
"Corruption detected in superblock incompatible features (0x%x)!",
(sbp->sb_features_incompat &
XFS_SB_FEAT_INCOMPAT_UNKNOWN));
return -EFSCORRUPTED;
}
if (xfs_sb_has_incompat_log_feature(sbp,
XFS_SB_FEAT_INCOMPAT_LOG_UNKNOWN)) {
xfs_warn(mp,
"Corruption detected in superblock incompatible log features (0x%x)!",
(sbp->sb_features_log_incompat &
XFS_SB_FEAT_INCOMPAT_LOG_UNKNOWN));
return -EFSCORRUPTED;
}
/*
* We can't read verify the sb LSN because the read verifier is called
* before the log is allocated and processed. We know the log is set up
* before write verifier calls, so check it here.
*/
if (!xfs_log_check_lsn(mp, sbp->sb_lsn))
return -EFSCORRUPTED;
return 0;
}
/* Check the validity of the SB. */
STATIC int
xfs_validate_sb_common(
struct xfs_mount *mp,
struct xfs_buf *bp,
struct xfs_sb *sbp)
{
uint32_t agcount = 0;
uint32_t rem;
if (sbp->sb_magicnum != XFS_SB_MAGIC) {
xfs_warn(mp, "bad magic number");
return -EWRONGFS;
}
if (!xfs_sb_good_version(sbp)) {
xfs_warn(mp, "bad version");
return -EWRONGFS;
}
/*
* Version 5 superblock feature mask validation. Reject combinations the
* kernel cannot support up front before checking anything else. For
* write validation, we don't need to check feature masks.
*/
if (check_version && XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) {
if (xfs_sb_has_compat_feature(sbp,
XFS_SB_FEAT_COMPAT_UNKNOWN)) {
xfs_warn(mp,
"Superblock has unknown compatible features (0x%x) enabled.",
(sbp->sb_features_compat &
XFS_SB_FEAT_COMPAT_UNKNOWN));
xfs_warn(mp,
"Using a more recent kernel is recommended.");
}
if (xfs_sb_has_ro_compat_feature(sbp,
XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
xfs_alert(mp,
"Superblock has unknown read-only compatible features (0x%x) enabled.",
(sbp->sb_features_ro_compat &
XFS_SB_FEAT_RO_COMPAT_UNKNOWN));
if (!(mp->m_flags & XFS_MOUNT_RDONLY)) {
xfs_warn(mp,
"Attempted to mount read-only compatible filesystem read-write.");
xfs_warn(mp,
"Filesystem can only be safely mounted read only.");
return -EINVAL;
}
}
if (xfs_sb_has_incompat_feature(sbp,
XFS_SB_FEAT_INCOMPAT_UNKNOWN)) {
xfs_warn(mp,
"Superblock has unknown incompatible features (0x%x) enabled.",
(sbp->sb_features_incompat &
XFS_SB_FEAT_INCOMPAT_UNKNOWN));
xfs_warn(mp,
"Filesystem can not be safely mounted by this kernel.");
return -EINVAL;
}
} else if (xfs_sb_version_hascrc(sbp)) {
/*
* We can't read verify the sb LSN because the read verifier is
* called before the log is allocated and processed. We know the
* log is set up before write verifier (!check_version) calls,
* so just check it here.
*/
if (!xfs_log_check_lsn(mp, sbp->sb_lsn))
return -EFSCORRUPTED;
}
if (xfs_sb_version_has_pquotino(sbp)) {
if (sbp->sb_qflags & (XFS_OQUOTA_ENFD | XFS_OQUOTA_CHKD)) {
xfs_notice(mp,
@ -321,7 +387,12 @@ xfs_mount_validate_sb(
return -EFBIG;
}
if (check_inprogress && sbp->sb_inprogress) {
/*
* Don't touch the filesystem if a user tool thinks it owns the primary
* superblock. mkfs doesn't clear the flag from secondary supers, so
* we don't check them at all.
*/
if (XFS_BUF_ADDR(bp) == XFS_SB_DADDR && sbp->sb_inprogress) {
xfs_warn(mp, "Offline file system operation in progress!");
return -EFSCORRUPTED;
}
@ -596,29 +667,6 @@ xfs_sb_to_disk(
}
}
static int
xfs_sb_verify(
struct xfs_buf *bp,
bool check_version)
{
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_sb sb;
/*
* Use call variant which doesn't convert quota flags from disk
* format, because xfs_mount_validate_sb checks the on-disk flags.
*/
__xfs_sb_from_disk(&sb, XFS_BUF_TO_SBP(bp), false);
/*
* Only check the in progress field for the primary superblock as
* mkfs.xfs doesn't clear it from secondary superblocks.
*/
return xfs_mount_validate_sb(mp, &sb,
bp->b_maps[0].bm_bn == XFS_SB_DADDR,
check_version);
}
/*
* If the superblock has the CRC feature bit set or the CRC field is non-null,
* check that the CRC is valid. We check the CRC field is non-null because a
@ -633,11 +681,12 @@ xfs_sb_verify(
*/
static void
xfs_sb_read_verify(
struct xfs_buf *bp)
struct xfs_buf *bp)
{
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_dsb *dsb = XFS_BUF_TO_SBP(bp);
int error;
struct xfs_sb sb;
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_dsb *dsb = XFS_BUF_TO_SBP(bp);
int error;
/*
* open code the version check to avoid needing to convert the entire
@ -657,7 +706,16 @@ xfs_sb_read_verify(
}
}
}
error = xfs_sb_verify(bp, true);
/*
* Check all the superblock fields. Don't byteswap the xquota flags
* because _verify_common checks the on-disk values.
*/
__xfs_sb_from_disk(&sb, XFS_BUF_TO_SBP(bp), false);
error = xfs_validate_sb_common(mp, bp, &sb);
if (error)
goto out_error;
error = xfs_validate_sb_read(mp, &sb);
out_error:
if (error == -EFSCORRUPTED || error == -EFSBADCRC)
@ -691,15 +749,22 @@ static void
xfs_sb_write_verify(
struct xfs_buf *bp)
{
struct xfs_sb sb;
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_buf_log_item *bip = bp->b_log_item;
int error;
error = xfs_sb_verify(bp, false);
if (error) {
xfs_verifier_error(bp, error, __this_address);
return;
}
/*
* Check all the superblock fields. Don't byteswap the xquota flags
* because _verify_common checks the on-disk values.
*/
__xfs_sb_from_disk(&sb, XFS_BUF_TO_SBP(bp), false);
error = xfs_validate_sb_common(mp, bp, &sb);
if (error)
goto out_error;
error = xfs_validate_sb_write(mp, bp, &sb);
if (error)
goto out_error;
if (!xfs_sb_version_hascrc(&mp->m_sb))
return;
@ -708,6 +773,10 @@ xfs_sb_write_verify(
XFS_BUF_TO_SBP(bp)->sb_lsn = cpu_to_be64(bip->bli_item.li_lsn);
xfs_buf_update_cksum(bp, XFS_SB_CRC_OFF);
return;
out_error:
xfs_verifier_error(bp, error, __this_address);
}
const struct xfs_buf_ops xfs_sb_buf_ops = {
@ -804,6 +873,7 @@ xfs_initialize_perag_data(
uint64_t bfree = 0;
uint64_t bfreelst = 0;
uint64_t btree = 0;
uint64_t fdblocks;
int error;
for (index = 0; index < agcount; index++) {
@ -827,17 +897,31 @@ xfs_initialize_perag_data(
btree += pag->pagf_btreeblks;
xfs_perag_put(pag);
}
fdblocks = bfree + bfreelst + btree;
/*
* If the new summary counts are obviously incorrect, fail the
* mount operation because that implies the AGFs are also corrupt.
* Clear BAD_SUMMARY so that we don't unmount with a dirty log, which
* will prevent xfs_repair from fixing anything.
*/
if (fdblocks > sbp->sb_dblocks || ifree > ialloc) {
xfs_alert(mp, "AGF corruption. Please run xfs_repair.");
error = -EFSCORRUPTED;
goto out;
}
/* Overwrite incore superblock counters with just-read data */
spin_lock(&mp->m_sb_lock);
sbp->sb_ifree = ifree;
sbp->sb_icount = ialloc;
sbp->sb_fdblocks = bfree + bfreelst + btree;
sbp->sb_fdblocks = fdblocks;
spin_unlock(&mp->m_sb_lock);
xfs_reinit_percpu_counters(mp);
return 0;
out:
mp->m_flags &= ~XFS_MOUNT_BAD_SUMMARY;
return error;
}
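
The new sanity test is simple arithmetic: the recomputed free-block total can never exceed the size of the data device, and free inodes can never exceed allocated inodes. A standalone model with made-up numbers:

#include <stdio.h>

int main(void)
{
	unsigned long long dblocks = 1000000;	/* blocks in the data device */
	unsigned long long bfree = 800000;	/* free extents (AGF) */
	unsigned long long bfreelst = 500;	/* AGFL blocks */
	unsigned long long btree = 1200;	/* free-space btree blocks */
	unsigned long long ialloc = 5000;	/* allocated inodes */
	unsigned long long ifree = 300;		/* free inodes */
	unsigned long long fdblocks = bfree + bfreelst + btree;

	if (fdblocks > dblocks || ifree > ialloc)
		puts("AGFs corrupt: fail the mount, leave a clean log");
	else
		printf("accept fdblocks=%llu ifree=%llu\n", fdblocks, ifree);
	return 0;
}
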
@@ -64,6 +64,18 @@ void xfs_log_get_max_trans_res(struct xfs_mount *mp,
#define XFS_TRANS_RESERVE 0x20 /* OK to use reserved data blocks */
#define XFS_TRANS_NO_WRITECOUNT 0x40 /* do not elevate SB writecount */
#define XFS_TRANS_NOFS 0x80 /* pass KM_NOFS to kmem_alloc */
/*
* LOWMODE is used by the allocator to activate the lowspace algorithm - when
* free space is running low the extent allocator may choose to allocate an
* extent from an AG without leaving sufficient space for a btree split when
* inserting the new extent. In this case the allocator will enable the
* lowspace algorithm which is supposed to allow further allocations (such as
* btree splits and newroots) to allocate from sequential AGs. In order to
* avoid locking AGs out of order the lowspace algorithm will start searching
* for free space from AG 0. If the correct transaction reservations have been
* made then this algorithm will eventually find all the space it needs.
*/
#define XFS_TRANS_LOWMODE 0x100 /* allocate in low space mode */
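
A toy model of that fallback, with invented AG free-space numbers: the first attempt insists on leaving slack for a btree split, and once that fails the transaction switches to low mode and scans the AGs in order from AG 0:

#include <stdio.h>

#define AGCOUNT		4
#define SPLIT_SLACK	8	/* blocks reserved for a btree split */

static int ag_free[AGCOUNT] = { 5, 2, 9, 3 };

static int try_alloc(int agno, int len, int slack)
{
	if (ag_free[agno] >= len + slack) {
		ag_free[agno] -= len;
		return 1;
	}
	return 0;
}

int main(void)
{
	int target = 2, len = 4, agno;

	if (try_alloc(target, len, SPLIT_SLACK)) {
		printf("normal mode: allocated from AG %d\n", target);
	} else {
		/* would set XFS_TRANS_LOWMODE on the transaction */
		for (agno = 0; agno < AGCOUNT; agno++)
			if (try_alloc(agno, len, 0))
				break;
		printf("low mode: allocated from AG %d\n", agno);
	}
	return 0;
}
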
/*
* Field values for xfs_trans_mod_sb.

@@ -171,3 +171,37 @@ xfs_verify_rtbno(
{
return rtbno < mp->m_sb.sb_rblocks;
}
/* Calculate the range of valid icount values. */
static void
xfs_icount_range(
struct xfs_mount *mp,
unsigned long long *min,
unsigned long long *max)
{
unsigned long long nr_inos = 0;
xfs_agnumber_t agno;
/* root, rtbitmap, rtsum all live in the first chunk */
*min = XFS_INODES_PER_CHUNK;
for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
xfs_agino_t first, last;
xfs_agino_range(mp, agno, &first, &last);
nr_inos += last - first + 1;
}
*max = nr_inos;
}
/* Sanity-checking of inode counts. */
bool
xfs_verify_icount(
struct xfs_mount *mp,
unsigned long long icount)
{
unsigned long long min, max;
xfs_icount_range(mp, &min, &max);
return icount >= min && icount <= max;
}
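
A standalone model of the same range check, with an invented geometry: the minimum is one inode chunk (the root, rtbitmap and rtsum inodes always exist) and the maximum is the sum of every AG's possible inodes:

#include <stdio.h>

#define INODES_PER_CHUNK	64
#define AGCOUNT			4

/* Pretend each AG can hold this many inodes (the last AG is smaller). */
static unsigned long long ag_inodes[AGCOUNT] = { 8192, 8192, 8192, 4096 };

int main(void)
{
	unsigned long long min = INODES_PER_CHUNK, max = 0, icount = 28672;
	int agno;

	for (agno = 0; agno < AGCOUNT; agno++)
		max += ag_inodes[agno];

	printf("icount %llu is %s (valid range [%llu, %llu])\n", icount,
			icount >= min && icount <= max ? "plausible" : "corrupt",
			min, max);
	return 0;
}
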

@@ -165,5 +165,6 @@ bool xfs_verify_ino(struct xfs_mount *mp, xfs_ino_t ino);
bool xfs_internal_inum(struct xfs_mount *mp, xfs_ino_t ino);
bool xfs_verify_dir_ino(struct xfs_mount *mp, xfs_ino_t ino);
bool xfs_verify_rtbno(struct xfs_mount *mp, xfs_rtblock_t rtbno);
bool xfs_verify_icount(struct xfs_mount *mp, unsigned long long icount);
#endif /* __XFS_TYPES_H__ */

(One file's diff is omitted here because of its size.)

@@ -17,24 +17,31 @@
#include "xfs_sb.h"
#include "xfs_inode.h"
#include "xfs_alloc.h"
#include "xfs_alloc_btree.h"
#include "xfs_ialloc.h"
#include "xfs_ialloc_btree.h"
#include "xfs_rmap.h"
#include "xfs_rmap_btree.h"
#include "xfs_refcount.h"
#include "xfs_refcount_btree.h"
#include "scrub/xfs_scrub.h"
#include "scrub/scrub.h"
#include "scrub/common.h"
#include "scrub/trace.h"
#include "scrub/repair.h"
#include "scrub/bitmap.h"
/* Superblock */
/* Repair the superblock. */
int
xfs_repair_superblock(
struct xfs_scrub_context *sc)
xrep_superblock(
struct xfs_scrub *sc)
{
struct xfs_mount *mp = sc->mp;
struct xfs_buf *bp;
xfs_agnumber_t agno;
int error;
struct xfs_mount *mp = sc->mp;
struct xfs_buf *bp;
xfs_agnumber_t agno;
int error;
/* Don't try to repair AG 0's sb; let xfs_repair deal with it. */
agno = sc->sm->sm_agno;
@@ -54,3 +61,873 @@ xfs_repair_superblock(
xfs_trans_log_buf(sc->tp, bp, 0, BBTOB(bp->b_length) - 1);
return error;
}
/* AGF */
struct xrep_agf_allocbt {
struct xfs_scrub *sc;
xfs_agblock_t freeblks;
xfs_agblock_t longest;
};
/* Record free space shape information. */
STATIC int
xrep_agf_walk_allocbt(
struct xfs_btree_cur *cur,
struct xfs_alloc_rec_incore *rec,
void *priv)
{
struct xrep_agf_allocbt *raa = priv;
int error = 0;
if (xchk_should_terminate(raa->sc, &error))
return error;
raa->freeblks += rec->ar_blockcount;
if (rec->ar_blockcount > raa->longest)
raa->longest = rec->ar_blockcount;
return error;
}
/* Does this AGFL block look sane? */
STATIC int
xrep_agf_check_agfl_block(
struct xfs_mount *mp,
xfs_agblock_t agbno,
void *priv)
{
struct xfs_scrub *sc = priv;
if (!xfs_verify_agbno(mp, sc->sa.agno, agbno))
return -EFSCORRUPTED;
return 0;
}
/*
* Offset within the xrep_find_ag_btree array for each btree type. Avoid the
* XFS_BTNUM_ names here to avoid creating a sparse array.
*/
enum {
XREP_AGF_BNOBT = 0,
XREP_AGF_CNTBT,
XREP_AGF_RMAPBT,
XREP_AGF_REFCOUNTBT,
XREP_AGF_END,
XREP_AGF_MAX
};
/* Check a btree root candidate. */
static inline bool
xrep_check_btree_root(
struct xfs_scrub *sc,
struct xrep_find_ag_btree *fab)
{
struct xfs_mount *mp = sc->mp;
xfs_agnumber_t agno = sc->sm->sm_agno;
return xfs_verify_agbno(mp, agno, fab->root) &&
fab->height <= XFS_BTREE_MAXLEVELS;
}
/*
* Given the btree roots described by *fab, find the roots, check them for
* sanity, and pass the root data back out via *fab.
*
* This is /also/ a chicken and egg problem because we have to use the rmapbt
* (rooted in the AGF) to find the btrees rooted in the AGF. We also have no
* idea if the btrees make any sense. If we hit obvious corruptions in those
* btrees we'll bail out.
*/
STATIC int
xrep_agf_find_btrees(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp,
struct xrep_find_ag_btree *fab,
struct xfs_buf *agfl_bp)
{
struct xfs_agf *old_agf = XFS_BUF_TO_AGF(agf_bp);
int error;
/* Go find the root data. */
error = xrep_find_ag_btree_roots(sc, agf_bp, fab, agfl_bp);
if (error)
return error;
/* We must find the bnobt, cntbt, and rmapbt roots. */
if (!xrep_check_btree_root(sc, &fab[XREP_AGF_BNOBT]) ||
!xrep_check_btree_root(sc, &fab[XREP_AGF_CNTBT]) ||
!xrep_check_btree_root(sc, &fab[XREP_AGF_RMAPBT]))
return -EFSCORRUPTED;
/*
* We relied on the rmapbt to reconstruct the AGF. If we get a
* different root then something's seriously wrong.
*/
if (fab[XREP_AGF_RMAPBT].root !=
be32_to_cpu(old_agf->agf_roots[XFS_BTNUM_RMAPi]))
return -EFSCORRUPTED;
/* We must find the refcountbt root if that feature is enabled. */
if (xfs_sb_version_hasreflink(&sc->mp->m_sb) &&
!xrep_check_btree_root(sc, &fab[XREP_AGF_REFCOUNTBT]))
return -EFSCORRUPTED;
return 0;
}
/*
* Reinitialize the AGF header, making an in-core copy of the old contents so
* that we know which in-core state needs to be reinitialized.
*/
STATIC void
xrep_agf_init_header(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp,
struct xfs_agf *old_agf)
{
struct xfs_mount *mp = sc->mp;
struct xfs_agf *agf = XFS_BUF_TO_AGF(agf_bp);
memcpy(old_agf, agf, sizeof(*old_agf));
memset(agf, 0, BBTOB(agf_bp->b_length));
agf->agf_magicnum = cpu_to_be32(XFS_AGF_MAGIC);
agf->agf_versionnum = cpu_to_be32(XFS_AGF_VERSION);
agf->agf_seqno = cpu_to_be32(sc->sa.agno);
agf->agf_length = cpu_to_be32(xfs_ag_block_count(mp, sc->sa.agno));
agf->agf_flfirst = old_agf->agf_flfirst;
agf->agf_fllast = old_agf->agf_fllast;
agf->agf_flcount = old_agf->agf_flcount;
if (xfs_sb_version_hascrc(&mp->m_sb))
uuid_copy(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid);
/* Mark the incore AGF data stale until we're done fixing things. */
ASSERT(sc->sa.pag->pagf_init);
sc->sa.pag->pagf_init = 0;
}
/* Set btree root information in an AGF. */
STATIC void
xrep_agf_set_roots(
struct xfs_scrub *sc,
struct xfs_agf *agf,
struct xrep_find_ag_btree *fab)
{
agf->agf_roots[XFS_BTNUM_BNOi] =
cpu_to_be32(fab[XREP_AGF_BNOBT].root);
agf->agf_levels[XFS_BTNUM_BNOi] =
cpu_to_be32(fab[XREP_AGF_BNOBT].height);
agf->agf_roots[XFS_BTNUM_CNTi] =
cpu_to_be32(fab[XREP_AGF_CNTBT].root);
agf->agf_levels[XFS_BTNUM_CNTi] =
cpu_to_be32(fab[XREP_AGF_CNTBT].height);
agf->agf_roots[XFS_BTNUM_RMAPi] =
cpu_to_be32(fab[XREP_AGF_RMAPBT].root);
agf->agf_levels[XFS_BTNUM_RMAPi] =
cpu_to_be32(fab[XREP_AGF_RMAPBT].height);
if (xfs_sb_version_hasreflink(&sc->mp->m_sb)) {
agf->agf_refcount_root =
cpu_to_be32(fab[XREP_AGF_REFCOUNTBT].root);
agf->agf_refcount_level =
cpu_to_be32(fab[XREP_AGF_REFCOUNTBT].height);
}
}
/* Update all AGF fields which derive from btree contents. */
STATIC int
xrep_agf_calc_from_btrees(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp)
{
struct xrep_agf_allocbt raa = { .sc = sc };
struct xfs_btree_cur *cur = NULL;
struct xfs_agf *agf = XFS_BUF_TO_AGF(agf_bp);
struct xfs_mount *mp = sc->mp;
xfs_agblock_t btreeblks;
xfs_agblock_t blocks;
int error;
/* Update the AGF counters from the bnobt. */
cur = xfs_allocbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno,
XFS_BTNUM_BNO);
error = xfs_alloc_query_all(cur, xrep_agf_walk_allocbt, &raa);
if (error)
goto err;
error = xfs_btree_count_blocks(cur, &blocks);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
btreeblks = blocks - 1;
agf->agf_freeblks = cpu_to_be32(raa.freeblks);
agf->agf_longest = cpu_to_be32(raa.longest);
/* Update the AGF counters from the cntbt. */
cur = xfs_allocbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno,
XFS_BTNUM_CNT);
error = xfs_btree_count_blocks(cur, &blocks);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
btreeblks += blocks - 1;
/* Update the AGF counters from the rmapbt. */
cur = xfs_rmapbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno);
error = xfs_btree_count_blocks(cur, &blocks);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
agf->agf_rmap_blocks = cpu_to_be32(blocks);
btreeblks += blocks - 1;
agf->agf_btreeblks = cpu_to_be32(btreeblks);
/* Update the AGF counters from the refcountbt. */
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
cur = xfs_refcountbt_init_cursor(mp, sc->tp, agf_bp,
sc->sa.agno);
error = xfs_btree_count_blocks(cur, &blocks);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
agf->agf_refcount_blocks = cpu_to_be32(blocks);
}
return 0;
err:
xfs_btree_del_cursor(cur, error);
return error;
}
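
If I read the arithmetic right, each btree keeps its root block no matter what, so agf_btreeblks counts only the blocks a tree occupies beyond its root; hence every contribution above is "blocks - 1". A trivial model with invented counts:

#include <stdio.h>

int main(void)
{
	unsigned int bnobt = 3, cntbt = 3, rmapbt = 7;	/* total blocks */
	unsigned int btreeblks = (bnobt - 1) + (cntbt - 1) + (rmapbt - 1);

	printf("agf_btreeblks = %u\n", btreeblks);	/* prints 10 */
	return 0;
}
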
/* Commit the new AGF and reinitialize the incore state. */
STATIC int
xrep_agf_commit_new(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp)
{
struct xfs_perag *pag;
struct xfs_agf *agf = XFS_BUF_TO_AGF(agf_bp);
/* Trigger fdblocks recalculation */
xfs_force_summary_recalc(sc->mp);
/* Write this to disk. */
xfs_trans_buf_set_type(sc->tp, agf_bp, XFS_BLFT_AGF_BUF);
xfs_trans_log_buf(sc->tp, agf_bp, 0, BBTOB(agf_bp->b_length) - 1);
/* Now reinitialize the in-core counters we changed. */
pag = sc->sa.pag;
pag->pagf_btreeblks = be32_to_cpu(agf->agf_btreeblks);
pag->pagf_freeblks = be32_to_cpu(agf->agf_freeblks);
pag->pagf_longest = be32_to_cpu(agf->agf_longest);
pag->pagf_levels[XFS_BTNUM_BNOi] =
be32_to_cpu(agf->agf_levels[XFS_BTNUM_BNOi]);
pag->pagf_levels[XFS_BTNUM_CNTi] =
be32_to_cpu(agf->agf_levels[XFS_BTNUM_CNTi]);
pag->pagf_levels[XFS_BTNUM_RMAPi] =
be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAPi]);
pag->pagf_refcount_level = be32_to_cpu(agf->agf_refcount_level);
pag->pagf_init = 1;
return 0;
}
/* Repair the AGF. v5 filesystems only. */
int
xrep_agf(
struct xfs_scrub *sc)
{
struct xrep_find_ag_btree fab[XREP_AGF_MAX] = {
[XREP_AGF_BNOBT] = {
.rmap_owner = XFS_RMAP_OWN_AG,
.buf_ops = &xfs_allocbt_buf_ops,
.magic = XFS_ABTB_CRC_MAGIC,
},
[XREP_AGF_CNTBT] = {
.rmap_owner = XFS_RMAP_OWN_AG,
.buf_ops = &xfs_allocbt_buf_ops,
.magic = XFS_ABTC_CRC_MAGIC,
},
[XREP_AGF_RMAPBT] = {
.rmap_owner = XFS_RMAP_OWN_AG,
.buf_ops = &xfs_rmapbt_buf_ops,
.magic = XFS_RMAP_CRC_MAGIC,
},
[XREP_AGF_REFCOUNTBT] = {
.rmap_owner = XFS_RMAP_OWN_REFC,
.buf_ops = &xfs_refcountbt_buf_ops,
.magic = XFS_REFC_CRC_MAGIC,
},
[XREP_AGF_END] = {
.buf_ops = NULL,
},
};
struct xfs_agf old_agf;
struct xfs_mount *mp = sc->mp;
struct xfs_buf *agf_bp;
struct xfs_buf *agfl_bp;
struct xfs_agf *agf;
int error;
/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
return -EOPNOTSUPP;
xchk_perag_get(sc->mp, &sc->sa);
/*
* Make sure we have the AGF buffer, as scrub might have decided it
* was corrupt after xfs_alloc_read_agf failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGF_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agf_bp, NULL);
if (error)
return error;
agf_bp->b_ops = &xfs_agf_buf_ops;
agf = XFS_BUF_TO_AGF(agf_bp);
/*
* Load the AGFL so that we can screen out OWN_AG blocks that are on
* the AGFL now; these blocks might have once been part of the
* bno/cnt/rmap btrees but are not now. This is a chicken and egg
* problem: the AGF is corrupt, so we have to trust the AGFL contents
* because we can't do any serious cross-referencing with any of the
* btrees rooted in the AGF. If the AGFL contents are obviously bad
* then we'll bail out.
*/
error = xfs_alloc_read_agfl(mp, sc->tp, sc->sa.agno, &agfl_bp);
if (error)
return error;
/*
* Spot-check the AGFL blocks; if they're obviously corrupt then
* there's nothing we can do but bail out.
*/
error = xfs_agfl_walk(sc->mp, XFS_BUF_TO_AGF(agf_bp), agfl_bp,
xrep_agf_check_agfl_block, sc);
if (error)
return error;
/*
* Find the AGF btree roots. This is also a chicken-and-egg situation;
* see the function for more details.
*/
error = xrep_agf_find_btrees(sc, agf_bp, fab, agfl_bp);
if (error)
return error;
/* Start rewriting the header and implant the btrees we found. */
xrep_agf_init_header(sc, agf_bp, &old_agf);
xrep_agf_set_roots(sc, agf, fab);
error = xrep_agf_calc_from_btrees(sc, agf_bp);
if (error)
goto out_revert;
/* Commit the changes and reinitialize incore state. */
return xrep_agf_commit_new(sc, agf_bp);
out_revert:
/* Mark the incore AGF state stale and revert the AGF. */
sc->sa.pag->pagf_init = 0;
memcpy(agf, &old_agf, sizeof(old_agf));
return error;
}
/* AGFL */
struct xrep_agfl {
/* Bitmap of other OWN_AG metadata blocks. */
struct xfs_bitmap agmetablocks;
/* Bitmap of free space. */
struct xfs_bitmap *freesp;
struct xfs_scrub *sc;
};
/* Record all OWN_AG (free space btree) information from the rmap data. */
STATIC int
xrep_agfl_walk_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
{
struct xrep_agfl *ra = priv;
xfs_fsblock_t fsb;
int error = 0;
if (xchk_should_terminate(ra->sc, &error))
return error;
/* Record all the OWN_AG blocks. */
if (rec->rm_owner == XFS_RMAP_OWN_AG) {
fsb = XFS_AGB_TO_FSB(cur->bc_mp, cur->bc_private.a.agno,
rec->rm_startblock);
error = xfs_bitmap_set(ra->freesp, fsb, rec->rm_blockcount);
if (error)
return error;
}
return xfs_bitmap_set_btcur_path(&ra->agmetablocks, cur);
}
/*
* Map out all the non-AGFL OWN_AG space in this AG so that we can deduce
* which blocks belong to the AGFL.
*
* Compute the set of old AGFL blocks by subtracting from the list of OWN_AG
* blocks the list of blocks owned by all other OWN_AG metadata (bnobt, cntbt,
* rmapbt). These are the old AGFL blocks, so return that list and the number
* of blocks we're actually going to put back on the AGFL.
*/
STATIC int
xrep_agfl_collect_blocks(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp,
struct xfs_bitmap *agfl_extents,
xfs_agblock_t *flcount)
{
struct xrep_agfl ra;
struct xfs_mount *mp = sc->mp;
struct xfs_btree_cur *cur;
struct xfs_bitmap_range *br;
struct xfs_bitmap_range *n;
int error;
ra.sc = sc;
ra.freesp = agfl_extents;
xfs_bitmap_init(&ra.agmetablocks);
/* Find all space used by the free space btrees & rmapbt. */
cur = xfs_rmapbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno);
error = xfs_rmap_query_all(cur, xrep_agfl_walk_rmap, &ra);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
/* Find all blocks currently being used by the bnobt. */
cur = xfs_allocbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno,
XFS_BTNUM_BNO);
error = xfs_bitmap_set_btblocks(&ra.agmetablocks, cur);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
/* Find all blocks currently being used by the cntbt. */
cur = xfs_allocbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno,
XFS_BTNUM_CNT);
error = xfs_bitmap_set_btblocks(&ra.agmetablocks, cur);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
/*
* Drop the freesp meta blocks that are in use by btrees.
* The remaining blocks /should/ be AGFL blocks.
*/
error = xfs_bitmap_disunion(agfl_extents, &ra.agmetablocks);
xfs_bitmap_destroy(&ra.agmetablocks);
if (error)
return error;
/*
* Calculate the new AGFL size. If we found more blocks than fit in
* the AGFL we'll free them later.
*/
*flcount = 0;
for_each_xfs_bitmap_extent(br, n, agfl_extents) {
*flcount += br->len;
if (*flcount > xfs_agfl_size(mp))
break;
}
if (*flcount > xfs_agfl_size(mp))
*flcount = xfs_agfl_size(mp);
return 0;
err:
xfs_bitmap_destroy(&ra.agmetablocks);
xfs_btree_del_cursor(cur, error);
return error;
}
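
The flcount computation at the end is plain arithmetic: sum the collected extents but cap the new AGFL length at xfs_agfl_size(); anything past the cap stays in the bitmap and is freed later. A standalone model with invented lengths:

#include <stdio.h>

int main(void)
{
	unsigned int agfl_size = 118;	/* assumed AGFL capacity in blocks */
	unsigned int extents[] = { 40, 60, 35 };
	unsigned int flcount = 0, i;

	for (i = 0; i < 3; i++) {
		flcount += extents[i];
		if (flcount > agfl_size)
			break;
	}
	if (flcount > agfl_size)
		flcount = agfl_size;

	/* prints "flcount = 118, 17 blocks left to reap" */
	printf("flcount = %u, %u blocks left to reap\n", flcount,
			40 + 60 + 35 - flcount);
	return 0;
}
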
/* Update the AGF and reset the in-core state. */
STATIC void
xrep_agfl_update_agf(
struct xfs_scrub *sc,
struct xfs_buf *agf_bp,
xfs_agblock_t flcount)
{
struct xfs_agf *agf = XFS_BUF_TO_AGF(agf_bp);
ASSERT(flcount <= xfs_agfl_size(sc->mp));
/* Trigger fdblocks recalculation */
xfs_force_summary_recalc(sc->mp);
/* Update the AGF counters. */
if (sc->sa.pag->pagf_init)
sc->sa.pag->pagf_flcount = flcount;
agf->agf_flfirst = cpu_to_be32(0);
agf->agf_flcount = cpu_to_be32(flcount);
agf->agf_fllast = cpu_to_be32(flcount - 1);
xfs_alloc_log_agf(sc->tp, agf_bp,
XFS_AGF_FLFIRST | XFS_AGF_FLLAST | XFS_AGF_FLCOUNT);
}
/* Write out a totally new AGFL. */
STATIC void
xrep_agfl_init_header(
struct xfs_scrub *sc,
struct xfs_buf *agfl_bp,
struct xfs_bitmap *agfl_extents,
xfs_agblock_t flcount)
{
struct xfs_mount *mp = sc->mp;
__be32 *agfl_bno;
struct xfs_bitmap_range *br;
struct xfs_bitmap_range *n;
struct xfs_agfl *agfl;
xfs_agblock_t agbno;
unsigned int fl_off;
ASSERT(flcount <= xfs_agfl_size(mp));
/*
* Start rewriting the header by setting the bno[] array to
* NULLAGBLOCK, then setting AGFL header fields.
*/
agfl = XFS_BUF_TO_AGFL(agfl_bp);
memset(agfl, 0xFF, BBTOB(agfl_bp->b_length));
agfl->agfl_magicnum = cpu_to_be32(XFS_AGFL_MAGIC);
agfl->agfl_seqno = cpu_to_be32(sc->sa.agno);
uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_meta_uuid);
/*
* Fill the AGFL with the remaining blocks. If agfl_extents has more
* blocks than fit in the AGFL, they will be freed in a subsequent
* step.
*/
fl_off = 0;
agfl_bno = XFS_BUF_TO_AGFL_BNO(mp, agfl_bp);
for_each_xfs_bitmap_extent(br, n, agfl_extents) {
agbno = XFS_FSB_TO_AGBNO(mp, br->start);
trace_xrep_agfl_insert(mp, sc->sa.agno, agbno, br->len);
while (br->len > 0 && fl_off < flcount) {
agfl_bno[fl_off] = cpu_to_be32(agbno);
fl_off++;
agbno++;
/*
* We've now used br->start by putting it in the AGFL,
* so bump br so that we don't reap the block later.
*/
br->start++;
br->len--;
}
if (br->len)
break;
list_del(&br->list);
kmem_free(br);
}
/* Write new AGFL to disk. */
xfs_trans_buf_set_type(sc->tp, agfl_bp, XFS_BLFT_AGFL_BUF);
xfs_trans_log_buf(sc->tp, agfl_bp, 0, BBTOB(agfl_bp->b_length) - 1);
}
/* Repair the AGFL. */
int
xrep_agfl(
struct xfs_scrub *sc)
{
struct xfs_owner_info oinfo;
struct xfs_bitmap agfl_extents;
struct xfs_mount *mp = sc->mp;
struct xfs_buf *agf_bp;
struct xfs_buf *agfl_bp;
xfs_agblock_t flcount;
int error;
/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
return -EOPNOTSUPP;
xchk_perag_get(sc->mp, &sc->sa);
xfs_bitmap_init(&agfl_extents);
/*
* Read the AGF so that we can query the rmapbt. We hope that there's
* nothing wrong with the AGF, but all the AG header repair functions
* have this chicken-and-egg problem.
*/
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.agno, 0, &agf_bp);
if (error)
return error;
if (!agf_bp)
return -ENOMEM;
/*
* Make sure we have the AGFL buffer, as scrub might have decided it
* was corrupt after xfs_alloc_read_agfl failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGFL_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agfl_bp, NULL);
if (error)
return error;
agfl_bp->b_ops = &xfs_agfl_buf_ops;
/* Gather all the extents we're going to put on the new AGFL. */
error = xrep_agfl_collect_blocks(sc, agf_bp, &agfl_extents, &flcount);
if (error)
goto err;
/*
* Update AGF and AGFL. We reset the global free block counter when
* we adjust the AGF flcount (which can fail) so avoid updating any
* buffers until we know that part works.
*/
xrep_agfl_update_agf(sc, agf_bp, flcount);
xrep_agfl_init_header(sc, agfl_bp, &agfl_extents, flcount);
/*
* Ok, the AGFL should be ready to go now. Roll the transaction to
* make the new AGFL permanent before we start using it to return
* freespace overflow to the freespace btrees.
*/
sc->sa.agf_bp = agf_bp;
sc->sa.agfl_bp = agfl_bp;
error = xrep_roll_ag_trans(sc);
if (error)
goto err;
/* Dump any AGFL overflow. */
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG);
return xrep_reap_extents(sc, &agfl_extents, &oinfo, XFS_AG_RESV_AGFL);
err:
xfs_bitmap_destroy(&agfl_extents);
return error;
}
/* AGI */
/*
* Offset within the xrep_find_ag_btree array for each btree type. Avoid the
* XFS_BTNUM_ names here to avoid creating a sparse array.
*/
enum {
XREP_AGI_INOBT = 0,
XREP_AGI_FINOBT,
XREP_AGI_END,
XREP_AGI_MAX
};
/*
* Given the inode btree roots described by *fab, find the roots, check them
* for sanity, and pass the root data back out via *fab.
*/
STATIC int
xrep_agi_find_btrees(
struct xfs_scrub *sc,
struct xrep_find_ag_btree *fab)
{
struct xfs_buf *agf_bp;
struct xfs_mount *mp = sc->mp;
int error;
/* Read the AGF. */
error = xfs_alloc_read_agf(mp, sc->tp, sc->sa.agno, 0, &agf_bp);
if (error)
return error;
if (!agf_bp)
return -ENOMEM;
/* Find the btree roots. */
error = xrep_find_ag_btree_roots(sc, agf_bp, fab, NULL);
if (error)
return error;
/* We must find the inobt root. */
if (!xrep_check_btree_root(sc, &fab[XREP_AGI_INOBT]))
return -EFSCORRUPTED;
/* We must find the finobt root if that feature is enabled. */
if (xfs_sb_version_hasfinobt(&mp->m_sb) &&
!xrep_check_btree_root(sc, &fab[XREP_AGI_FINOBT]))
return -EFSCORRUPTED;
return 0;
}
/*
* Reinitialize the AGI header, making an in-core copy of the old contents so
* that we know which in-core state needs to be reinitialized.
*/
STATIC void
xrep_agi_init_header(
struct xfs_scrub *sc,
struct xfs_buf *agi_bp,
struct xfs_agi *old_agi)
{
struct xfs_agi *agi = XFS_BUF_TO_AGI(agi_bp);
struct xfs_mount *mp = sc->mp;
memcpy(old_agi, agi, sizeof(*old_agi));
memset(agi, 0, BBTOB(agi_bp->b_length));
agi->agi_magicnum = cpu_to_be32(XFS_AGI_MAGIC);
agi->agi_versionnum = cpu_to_be32(XFS_AGI_VERSION);
agi->agi_seqno = cpu_to_be32(sc->sa.agno);
agi->agi_length = cpu_to_be32(xfs_ag_block_count(mp, sc->sa.agno));
agi->agi_newino = cpu_to_be32(NULLAGINO);
agi->agi_dirino = cpu_to_be32(NULLAGINO);
if (xfs_sb_version_hascrc(&mp->m_sb))
uuid_copy(&agi->agi_uuid, &mp->m_sb.sb_meta_uuid);
/* We don't know how to fix the unlinked list yet. */
memcpy(&agi->agi_unlinked, &old_agi->agi_unlinked,
sizeof(agi->agi_unlinked));
/* Mark the incore AGI data stale until we're done fixing things. */
ASSERT(sc->sa.pag->pagi_init);
sc->sa.pag->pagi_init = 0;
}
/* Set btree root information in an AGI. */
STATIC void
xrep_agi_set_roots(
struct xfs_scrub *sc,
struct xfs_agi *agi,
struct xrep_find_ag_btree *fab)
{
agi->agi_root = cpu_to_be32(fab[XREP_AGI_INOBT].root);
agi->agi_level = cpu_to_be32(fab[XREP_AGI_INOBT].height);
if (xfs_sb_version_hasfinobt(&sc->mp->m_sb)) {
agi->agi_free_root = cpu_to_be32(fab[XREP_AGI_FINOBT].root);
agi->agi_free_level = cpu_to_be32(fab[XREP_AGI_FINOBT].height);
}
}
/* Update the AGI counters. */
STATIC int
xrep_agi_calc_from_btrees(
struct xfs_scrub *sc,
struct xfs_buf *agi_bp)
{
struct xfs_btree_cur *cur;
struct xfs_agi *agi = XFS_BUF_TO_AGI(agi_bp);
struct xfs_mount *mp = sc->mp;
xfs_agino_t count;
xfs_agino_t freecount;
int error;
cur = xfs_inobt_init_cursor(mp, sc->tp, agi_bp, sc->sa.agno,
XFS_BTNUM_INO);
error = xfs_ialloc_count_inodes(cur, &count, &freecount);
if (error)
goto err;
xfs_btree_del_cursor(cur, error);
agi->agi_count = cpu_to_be32(count);
agi->agi_freecount = cpu_to_be32(freecount);
return 0;
err:
xfs_btree_del_cursor(cur, error);
return error;
}
/* Trigger reinitialization of the in-core data. */
STATIC int
xrep_agi_commit_new(
struct xfs_scrub *sc,
struct xfs_buf *agi_bp)
{
struct xfs_perag *pag;
struct xfs_agi *agi = XFS_BUF_TO_AGI(agi_bp);
/* Trigger inode count recalculation */
xfs_force_summary_recalc(sc->mp);
/* Write this to disk. */
xfs_trans_buf_set_type(sc->tp, agi_bp, XFS_BLFT_AGI_BUF);
xfs_trans_log_buf(sc->tp, agi_bp, 0, BBTOB(agi_bp->b_length) - 1);
/* Now reinitialize the in-core counters if necessary. */
pag = sc->sa.pag;
pag->pagi_count = be32_to_cpu(agi->agi_count);
pag->pagi_freecount = be32_to_cpu(agi->agi_freecount);
pag->pagi_init = 1;
return 0;
}
/* Repair the AGI. */
int
xrep_agi(
struct xfs_scrub *sc)
{
struct xrep_find_ag_btree fab[XREP_AGI_MAX] = {
[XREP_AGI_INOBT] = {
.rmap_owner = XFS_RMAP_OWN_INOBT,
.buf_ops = &xfs_inobt_buf_ops,
.magic = XFS_IBT_CRC_MAGIC,
},
[XREP_AGI_FINOBT] = {
.rmap_owner = XFS_RMAP_OWN_INOBT,
.buf_ops = &xfs_inobt_buf_ops,
.magic = XFS_FIBT_CRC_MAGIC,
},
[XREP_AGI_END] = {
.buf_ops = NULL
},
};
struct xfs_agi old_agi;
struct xfs_mount *mp = sc->mp;
struct xfs_buf *agi_bp;
struct xfs_agi *agi;
int error;
/* We require the rmapbt to rebuild anything. */
if (!xfs_sb_version_hasrmapbt(&mp->m_sb))
return -EOPNOTSUPP;
xchk_perag_get(sc->mp, &sc->sa);
/*
* Make sure we have the AGI buffer, as scrub might have decided it
* was corrupt after xfs_ialloc_read_agi failed with -EFSCORRUPTED.
*/
error = xfs_trans_read_buf(mp, sc->tp, mp->m_ddev_targp,
XFS_AG_DADDR(mp, sc->sa.agno, XFS_AGI_DADDR(mp)),
XFS_FSS_TO_BB(mp, 1), 0, &agi_bp, NULL);
if (error)
return error;
agi_bp->b_ops = &xfs_agi_buf_ops;
agi = XFS_BUF_TO_AGI(agi_bp);
/* Find the AGI btree roots. */
error = xrep_agi_find_btrees(sc, fab);
if (error)
return error;
/* Start rewriting the header and implant the btrees we found. */
xrep_agi_init_header(sc, agi_bp, &old_agi);
xrep_agi_set_roots(sc, agi, fab);
error = xrep_agi_calc_from_btrees(sc, agi_bp);
if (error)
goto out_revert;
/* Reinitialize in-core state. */
return xrep_agi_commit_new(sc, agi_bp);
out_revert:
/* Mark the incore AGI state stale and revert the AGI. */
sc->sa.pag->pagi_init = 0;
memcpy(agi, &old_agi, sizeof(old_agi));
return error;
}

@@ -28,11 +28,11 @@
* Set us up to scrub free space btrees.
*/
int
xfs_scrub_setup_ag_allocbt(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_ag_allocbt(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
return xfs_scrub_setup_ag_btree(sc, ip, false);
return xchk_setup_ag_btree(sc, ip, false);
}
/* Free space btree scrubber. */
@@ -41,71 +41,71 @@ xfs_scrub_setup_ag_allocbt(
* bnobt/cntbt record, respectively.
*/
STATIC void
xfs_scrub_allocbt_xref_other(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
xchk_allocbt_xref_other(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
struct xfs_btree_cur **pcur;
xfs_agblock_t fbno;
xfs_extlen_t flen;
int has_otherrec;
int error;
if (sc->sm->sm_type == XFS_SCRUB_TYPE_BNOBT)
pcur = &sc->sa.cnt_cur;
else
pcur = &sc->sa.bno_cur;
if (!*pcur || xfs_scrub_skip_xref(sc->sm))
if (!*pcur || xchk_skip_xref(sc->sm))
return;
error = xfs_alloc_lookup_le(*pcur, agbno, len, &has_otherrec);
if (!xfs_scrub_should_check_xref(sc, &error, pcur))
if (!xchk_should_check_xref(sc, &error, pcur))
return;
if (!has_otherrec) {
xfs_scrub_btree_xref_set_corrupt(sc, *pcur, 0);
xchk_btree_xref_set_corrupt(sc, *pcur, 0);
return;
}
error = xfs_alloc_get_rec(*pcur, &fbno, &flen, &has_otherrec);
if (!xfs_scrub_should_check_xref(sc, &error, pcur))
if (!xchk_should_check_xref(sc, &error, pcur))
return;
if (!has_otherrec) {
xfs_scrub_btree_xref_set_corrupt(sc, *pcur, 0);
xchk_btree_xref_set_corrupt(sc, *pcur, 0);
return;
}
if (fbno != agbno || flen != len)
xfs_scrub_btree_xref_set_corrupt(sc, *pcur, 0);
xchk_btree_xref_set_corrupt(sc, *pcur, 0);
}
/* Cross-reference with the other btrees. */
STATIC void
xfs_scrub_allocbt_xref(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
xchk_allocbt_xref(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
xfs_scrub_allocbt_xref_other(sc, agbno, len);
xfs_scrub_xref_is_not_inode_chunk(sc, agbno, len);
xfs_scrub_xref_has_no_owner(sc, agbno, len);
xfs_scrub_xref_is_not_shared(sc, agbno, len);
xchk_allocbt_xref_other(sc, agbno, len);
xchk_xref_is_not_inode_chunk(sc, agbno, len);
xchk_xref_has_no_owner(sc, agbno, len);
xchk_xref_is_not_shared(sc, agbno, len);
}
/* Scrub a bnobt/cntbt record. */
STATIC int
xfs_scrub_allocbt_rec(
struct xfs_scrub_btree *bs,
union xfs_btree_rec *rec)
xchk_allocbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
xfs_agnumber_t agno = bs->cur->bc_private.a.agno;
xfs_agblock_t bno;
xfs_extlen_t len;
int error = 0;
bno = be32_to_cpu(rec->alloc.ar_startblock);
len = be32_to_cpu(rec->alloc.ar_blockcount);
@@ -113,57 +113,57 @@ xfs_scrub_allocbt_rec(
if (bno + len <= bno ||
!xfs_verify_agbno(mp, agno, bno) ||
!xfs_verify_agbno(mp, agno, bno + len - 1))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
xfs_scrub_allocbt_xref(bs->sc, bno, len);
xchk_allocbt_xref(bs->sc, bno, len);
return error;
}
/* Scrub the freespace btrees for some AG. */
STATIC int
xfs_scrub_allocbt(
struct xfs_scrub_context *sc,
xfs_btnum_t which)
xchk_allocbt(
struct xfs_scrub *sc,
xfs_btnum_t which)
{
struct xfs_owner_info oinfo;
struct xfs_btree_cur *cur;
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG);
cur = which == XFS_BTNUM_BNO ? sc->sa.bno_cur : sc->sa.cnt_cur;
return xfs_scrub_btree(sc, cur, xfs_scrub_allocbt_rec, &oinfo, NULL);
return xchk_btree(sc, cur, xchk_allocbt_rec, &oinfo, NULL);
}
int
xfs_scrub_bnobt(
struct xfs_scrub_context *sc)
xchk_bnobt(
struct xfs_scrub *sc)
{
return xfs_scrub_allocbt(sc, XFS_BTNUM_BNO);
return xchk_allocbt(sc, XFS_BTNUM_BNO);
}
int
xfs_scrub_cntbt(
struct xfs_scrub_context *sc)
xchk_cntbt(
struct xfs_scrub *sc)
{
return xfs_scrub_allocbt(sc, XFS_BTNUM_CNT);
return xchk_allocbt(sc, XFS_BTNUM_CNT);
}
/* xref check that the extent is not free */
void
xfs_scrub_xref_is_used_space(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
xchk_xref_is_used_space(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
bool is_freesp;
int error;
if (!sc->sa.bno_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.bno_cur || xchk_skip_xref(sc->sm))
return;
error = xfs_alloc_has_record(sc->sa.bno_cur, agbno, len, &is_freesp);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.bno_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.bno_cur))
return;
if (is_freesp)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.bno_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.bno_cur, 0);
}

@@ -32,11 +32,11 @@
/* Set us up to scrub an inode's extended attributes. */
int
xfs_scrub_setup_xattr(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_xattr(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
size_t sz;
/*
* Allocate the buffer without the inode lock held. We need enough
@@ -50,14 +50,14 @@ xfs_scrub_setup_xattr(
if (!sc->buf)
return -ENOMEM;
return xfs_scrub_setup_inode_contents(sc, ip, 0);
return xchk_setup_inode_contents(sc, ip, 0);
}
/* Extended Attributes */
struct xfs_scrub_xattr {
struct xchk_xattr {
struct xfs_attr_list_context context;
struct xfs_scrub_context *sc;
struct xfs_scrub *sc;
};
/*
@@ -69,22 +69,22 @@ struct xfs_scrub_xattr {
* or if we get more or less data than we expected.
*/
static void
xfs_scrub_xattr_listent(
xchk_xattr_listent(
struct xfs_attr_list_context *context,
int flags,
unsigned char *name,
int namelen,
int valuelen)
{
struct xfs_scrub_xattr *sx;
struct xchk_xattr *sx;
struct xfs_da_args args = { NULL };
int error = 0;
sx = container_of(context, struct xfs_scrub_xattr, context);
sx = container_of(context, struct xchk_xattr, context);
if (flags & XFS_ATTR_INCOMPLETE) {
/* Incomplete attr key, just mark the inode for preening. */
xfs_scrub_ino_set_preen(sx->sc, context->dp->i_ino);
xchk_ino_set_preen(sx->sc, context->dp->i_ino);
return;
}
@@ -106,11 +106,11 @@ xfs_scrub_xattr_listent(
error = xfs_attr_get_ilocked(context->dp, &args);
if (error == -EEXIST)
error = 0;
if (!xfs_scrub_fblock_process_error(sx->sc, XFS_ATTR_FORK, args.blkno,
if (!xchk_fblock_process_error(sx->sc, XFS_ATTR_FORK, args.blkno,
&error))
goto fail_xref;
if (args.valuelen != valuelen)
xfs_scrub_fblock_set_corrupt(sx->sc, XFS_ATTR_FORK,
xchk_fblock_set_corrupt(sx->sc, XFS_ATTR_FORK,
args.blkno);
fail_xref:
if (sx->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
@@ -126,14 +126,14 @@ fail_xref:
* the smallest address
*/
STATIC bool
xfs_scrub_xattr_set_map(
struct xfs_scrub_context *sc,
unsigned long *map,
unsigned int start,
unsigned int len)
xchk_xattr_set_map(
struct xfs_scrub *sc,
unsigned long *map,
unsigned int start,
unsigned int len)
{
unsigned int mapsize = sc->mp->m_attr_geo->blksize;
bool ret = true;
if (start >= mapsize)
return false;
@@ -154,8 +154,8 @@ xfs_scrub_xattr_set_map(
* attr freemap has problems or points to used space.
*/
STATIC bool
xfs_scrub_xattr_check_freemap(
struct xfs_scrub_context *sc,
xchk_xattr_check_freemap(
struct xfs_scrub *sc,
unsigned long *map,
struct xfs_attr3_icleaf_hdr *leafhdr)
{
@@ -168,7 +168,7 @@ xfs_scrub_xattr_check_freemap(
freemap = (unsigned long *)sc->buf + BITS_TO_LONGS(mapsize);
bitmap_zero(freemap, mapsize);
for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
if (!xfs_scrub_xattr_set_map(sc, freemap,
if (!xchk_xattr_set_map(sc, freemap,
leafhdr->freemap[i].base,
leafhdr->freemap[i].size))
return false;
@@ -184,8 +184,8 @@ xfs_scrub_xattr_check_freemap(
* Returns the number of bytes used for the name/value data.
*/
STATIC void
xfs_scrub_xattr_entry(
struct xfs_scrub_da_btree *ds,
xchk_xattr_entry(
struct xchk_da_btree *ds,
int level,
char *buf_end,
struct xfs_attr_leafblock *leaf,
@@ -204,17 +204,17 @@ xfs_scrub_xattr_entry(
unsigned int namesize;
if (ent->pad2 != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
/* Hash values in order? */
if (be32_to_cpu(ent->hashval) < *last_hashval)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
*last_hashval = be32_to_cpu(ent->hashval);
nameidx = be16_to_cpu(ent->nameidx);
if (nameidx < leafhdr->firstused ||
nameidx >= mp->m_attr_geo->blksize) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
return;
}
@@ -225,27 +225,27 @@ xfs_scrub_xattr_entry(
be16_to_cpu(lentry->valuelen));
name_end = (char *)lentry + namesize;
if (lentry->namelen == 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
} else {
rentry = xfs_attr3_leaf_name_remote(leaf, idx);
namesize = xfs_attr_leaf_entsize_remote(rentry->namelen);
name_end = (char *)rentry + namesize;
if (rentry->namelen == 0 || rentry->valueblk == 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
}
if (name_end > buf_end)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
if (!xfs_scrub_xattr_set_map(ds->sc, usedmap, nameidx, namesize))
xfs_scrub_da_set_corrupt(ds, level);
if (!xchk_xattr_set_map(ds->sc, usedmap, nameidx, namesize))
xchk_da_set_corrupt(ds, level);
if (!(ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
*usedbytes += namesize;
}
/* Scrub an attribute leaf. */
STATIC int
xfs_scrub_xattr_block(
struct xfs_scrub_da_btree *ds,
xchk_xattr_block(
struct xchk_da_btree *ds,
int level)
{
struct xfs_attr3_icleaf_hdr leafhdr;
@@ -275,10 +275,10 @@ xfs_scrub_xattr_block(
if (leaf->hdr.pad1 != 0 || leaf->hdr.pad2 != 0 ||
leaf->hdr.info.hdr.pad != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
} else {
if (leaf->hdr.pad1 != 0 || leaf->hdr.info.pad != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
}
/* Check the leaf header */
@@ -286,44 +286,44 @@ xfs_scrub_xattr_block(
hdrsize = xfs_attr3_leaf_hdr_size(leaf);
if (leafhdr.usedbytes > mp->m_attr_geo->blksize)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
if (leafhdr.firstused > mp->m_attr_geo->blksize)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
if (leafhdr.firstused < hdrsize)
xfs_scrub_da_set_corrupt(ds, level);
if (!xfs_scrub_xattr_set_map(ds->sc, usedmap, 0, hdrsize))
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
if (!xchk_xattr_set_map(ds->sc, usedmap, 0, hdrsize))
xchk_da_set_corrupt(ds, level);
if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
entries = xfs_attr3_leaf_entryp(leaf);
if ((char *)&entries[leafhdr.count] > (char *)leaf + leafhdr.firstused)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
buf_end = (char *)bp->b_addr + mp->m_attr_geo->blksize;
for (i = 0, ent = entries; i < leafhdr.count; ent++, i++) {
/* Mark the leaf entry itself. */
off = (char *)ent - (char *)leaf;
if (!xfs_scrub_xattr_set_map(ds->sc, usedmap, off,
if (!xchk_xattr_set_map(ds->sc, usedmap, off,
sizeof(xfs_attr_leaf_entry_t))) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out;
}
/* Check the entry and nameval. */
xfs_scrub_xattr_entry(ds, level, buf_end, leaf, &leafhdr,
xchk_xattr_entry(ds, level, buf_end, leaf, &leafhdr,
usedmap, ent, i, &usedbytes, &last_hashval);
if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
}
if (!xfs_scrub_xattr_check_freemap(ds->sc, usedmap, &leafhdr))
xfs_scrub_da_set_corrupt(ds, level);
if (!xchk_xattr_check_freemap(ds->sc, usedmap, &leafhdr))
xchk_da_set_corrupt(ds, level);
if (leafhdr.usedbytes != usedbytes)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
out:
return 0;
@@ -331,8 +331,8 @@ out:
/* Scrub an attribute btree record. */
STATIC int
xfs_scrub_xattr_rec(
struct xfs_scrub_da_btree *ds,
xchk_xattr_rec(
struct xchk_da_btree *ds,
int level,
void *rec)
{
@@ -352,14 +352,14 @@ xfs_scrub_xattr_rec(
blk = &ds->state->path.blk[level];
/* Check the whole block, if necessary. */
error = xfs_scrub_xattr_block(ds, level);
error = xchk_xattr_block(ds, level);
if (error)
goto out;
if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
/* Check the hash of the entry. */
error = xfs_scrub_da_btree_hash(ds, level, &ent->hashval);
error = xchk_da_btree_hash(ds, level, &ent->hashval);
if (error)
goto out;
@@ -368,7 +368,7 @@ xfs_scrub_xattr_rec(
hdrsize = xfs_attr3_leaf_hdr_size(bp->b_addr);
nameidx = be16_to_cpu(ent->nameidx);
if (nameidx < hdrsize || nameidx >= mp->m_attr_geo->blksize) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out;
}
@@ -377,12 +377,12 @@ xfs_scrub_xattr_rec(
badflags = ~(XFS_ATTR_LOCAL | XFS_ATTR_ROOT | XFS_ATTR_SECURE |
XFS_ATTR_INCOMPLETE);
if ((ent->flags & badflags) != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
if (ent->flags & XFS_ATTR_LOCAL) {
lentry = (struct xfs_attr_leaf_name_local *)
(((char *)bp->b_addr) + nameidx);
if (lentry->namelen <= 0) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out;
}
calc_hash = xfs_da_hashname(lentry->nameval, lentry->namelen);
@@ -390,13 +390,13 @@ xfs_scrub_xattr_rec(
rentry = (struct xfs_attr_leaf_name_remote *)
(((char *)bp->b_addr) + nameidx);
if (rentry->namelen <= 0) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out;
}
calc_hash = xfs_da_hashname(rentry->name, rentry->namelen);
}
if (calc_hash != hash)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
out:
return error;
@@ -404,10 +404,10 @@ out:
/* Scrub the extended attribute metadata. */
int
xfs_scrub_xattr(
struct xfs_scrub_context *sc)
xchk_xattr(
struct xfs_scrub *sc)
{
struct xfs_scrub_xattr sx;
struct xchk_xattr sx;
struct attrlist_cursor_kern cursor = { 0 };
xfs_dablk_t last_checked = -1U;
int error = 0;
@@ -417,7 +417,7 @@ xfs_scrub_xattr(
memset(&sx, 0, sizeof(sx));
/* Check attribute tree structure */
error = xfs_scrub_da_btree(sc, XFS_ATTR_FORK, xfs_scrub_xattr_rec,
error = xchk_da_btree(sc, XFS_ATTR_FORK, xchk_xattr_rec,
&last_checked);
if (error)
goto out;
@@ -429,7 +429,7 @@ xfs_scrub_xattr(
sx.context.dp = sc->ip;
sx.context.cursor = &cursor;
sx.context.resynch = 1;
sx.context.put_listent = xfs_scrub_xattr_listent;
sx.context.put_listent = xchk_xattr_listent;
sx.context.tp = sc->tp;
sx.context.flags = ATTR_INCOMPLETE;
sx.sc = sc;
@@ -438,7 +438,7 @@ xfs_scrub_xattr(
* Look up every xattr in this file by name.
*
* Use the backend implementation of xfs_attr_list to call
* xfs_scrub_xattr_listent on every attribute key in this inode.
* xchk_xattr_listent on every attribute key in this inode.
* In other words, we use the same iterator/callback mechanism
* that listattr uses to scrub extended attributes, though in our
* _listent function, we check the value of the attribute.
@@ -451,7 +451,7 @@ xfs_scrub_xattr(
* locking order.
*/
error = xfs_attr_list_int_ilocked(&sx.context);
if (!xfs_scrub_fblock_process_error(sc, XFS_ATTR_FORK, 0, &error))
if (!xchk_fblock_process_error(sc, XFS_ATTR_FORK, 0, &error))
goto out;
out:
return error;

fs/xfs/scrub/bitmap.c (new file)

@@ -0,0 +1,303 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (C) 2018 Oracle. All Rights Reserved.
* Author: Darrick J. Wong <darrick.wong@oracle.com>
*/
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_trans_resv.h"
#include "xfs_mount.h"
#include "xfs_btree.h"
#include "scrub/xfs_scrub.h"
#include "scrub/scrub.h"
#include "scrub/common.h"
#include "scrub/trace.h"
#include "scrub/repair.h"
#include "scrub/bitmap.h"
/*
* Set a range of this bitmap. Caller must ensure the range is not set.
*
* This is the logical equivalent of bitmap |= mask(start, len).
*/
int
xfs_bitmap_set(
struct xfs_bitmap *bitmap,
uint64_t start,
uint64_t len)
{
struct xfs_bitmap_range *bmr;
bmr = kmem_alloc(sizeof(struct xfs_bitmap_range), KM_MAYFAIL);
if (!bmr)
return -ENOMEM;
INIT_LIST_HEAD(&bmr->list);
bmr->start = start;
bmr->len = len;
list_add_tail(&bmr->list, &bitmap->list);
return 0;
}
/* Free everything related to this bitmap. */
void
xfs_bitmap_destroy(
struct xfs_bitmap *bitmap)
{
struct xfs_bitmap_range *bmr;
struct xfs_bitmap_range *n;
for_each_xfs_bitmap_extent(bmr, n, bitmap) {
list_del(&bmr->list);
kmem_free(bmr);
}
}
/* Set up a per-AG block bitmap. */
void
xfs_bitmap_init(
struct xfs_bitmap *bitmap)
{
INIT_LIST_HEAD(&bitmap->list);
}
/* Compare two btree extents. */
static int
xfs_bitmap_range_cmp(
void *priv,
struct list_head *a,
struct list_head *b)
{
struct xfs_bitmap_range *ap;
struct xfs_bitmap_range *bp;
ap = container_of(a, struct xfs_bitmap_range, list);
bp = container_of(b, struct xfs_bitmap_range, list);
if (ap->start > bp->start)
return 1;
if (ap->start < bp->start)
return -1;
return 0;
}
/*
* Remove all the blocks mentioned in @sub from the extents in @bitmap.
*
* The intent is that callers will iterate the rmapbt for all of its records
* for a given owner to generate @bitmap; and iterate all the blocks of the
* metadata structures that are not being rebuilt and have the same rmapbt
* owner to generate @sub. This routine subtracts all the extents
* mentioned in sub from all the extents linked in @bitmap, which leaves
* @bitmap as the list of blocks that are not accounted for, which we assume
* are the dead blocks of the old metadata structure. The blocks mentioned in
* @bitmap can be reaped.
*
* This is the logical equivalent of bitmap &= ~sub.
*/
#define LEFT_ALIGNED (1 << 0)
#define RIGHT_ALIGNED (1 << 1)
int
xfs_bitmap_disunion(
struct xfs_bitmap *bitmap,
struct xfs_bitmap *sub)
{
struct list_head *lp;
struct xfs_bitmap_range *br;
struct xfs_bitmap_range *new_br;
struct xfs_bitmap_range *sub_br;
uint64_t sub_start;
uint64_t sub_len;
int state;
int error = 0;
if (list_empty(&bitmap->list) || list_empty(&sub->list))
return 0;
ASSERT(!list_empty(&sub->list));
list_sort(NULL, &bitmap->list, xfs_bitmap_range_cmp);
list_sort(NULL, &sub->list, xfs_bitmap_range_cmp);
/*
* Now that we've sorted both lists, we iterate bitmap once, rolling
* forward through sub and/or bitmap as necessary until we find an
* overlap or reach the end of either list. We do not reset lp to the
* head of bitmap nor do we reset sub_br to the head of sub. The
* list traversal is similar to merge sort, but we're deleting
* instead. In this manner we avoid O(n^2) operations.
*/
sub_br = list_first_entry(&sub->list, struct xfs_bitmap_range,
list);
lp = bitmap->list.next;
while (lp != &bitmap->list) {
br = list_entry(lp, struct xfs_bitmap_range, list);
/*
* Advance sub_br and/or br until we find a pair that
* intersect or we run out of extents.
*/
while (sub_br->start + sub_br->len <= br->start) {
if (list_is_last(&sub_br->list, &sub->list))
goto out;
sub_br = list_next_entry(sub_br, list);
}
if (sub_br->start >= br->start + br->len) {
lp = lp->next;
continue;
}
/* trim sub_br to fit the extent we have */
sub_start = sub_br->start;
sub_len = sub_br->len;
if (sub_br->start < br->start) {
sub_len -= br->start - sub_br->start;
sub_start = br->start;
}
if (sub_len > br->len)
sub_len = br->len;
state = 0;
if (sub_start == br->start)
state |= LEFT_ALIGNED;
if (sub_start + sub_len == br->start + br->len)
state |= RIGHT_ALIGNED;
switch (state) {
case LEFT_ALIGNED:
/* Coincides with only the left. */
br->start += sub_len;
br->len -= sub_len;
break;
case RIGHT_ALIGNED:
/* Coincides with only the right. */
br->len -= sub_len;
lp = lp->next;
break;
case LEFT_ALIGNED | RIGHT_ALIGNED:
/* Total overlap, just delete br. */
lp = lp->next;
list_del(&br->list);
kmem_free(br);
break;
case 0:
/*
* Deleting from the middle: add the new right extent
* and then shrink the left extent.
*/
new_br = kmem_alloc(sizeof(struct xfs_bitmap_range),
KM_MAYFAIL);
if (!new_br) {
error = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&new_br->list);
new_br->start = sub_start + sub_len;
new_br->len = br->start + br->len - new_br->start;
list_add(&new_br->list, &br->list);
br->len = sub_start - br->start;
lp = lp->next;
break;
default:
ASSERT(0);
break;
}
}
out:
return error;
}
#undef LEFT_ALIGNED
#undef RIGHT_ALIGNED
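
A sketch of how a repair function might drive this API, using only the helpers defined in this file (kernel-style pseudocode with invented block numbers, not a real caller):

/* Hypothetical example: find the reapable leftovers of an old btree. */
static int xrep_example_reap_old_blocks(void)
{
	struct xfs_bitmap all_owned;	/* every OWN_AG block per rmapbt */
	struct xfs_bitmap still_used;	/* blocks of the live btrees */
	struct xfs_bitmap_range *br, *n;
	int error;

	xfs_bitmap_init(&all_owned);
	xfs_bitmap_init(&still_used);

	error = xfs_bitmap_set(&all_owned, 100, 50);	/* blocks 100-149 */
	if (!error)
		error = xfs_bitmap_set(&still_used, 120, 10);	/* 120-129 */
	if (!error)
		error = xfs_bitmap_disunion(&all_owned, &still_used);
	if (error)
		goto out;

	/* all_owned now holds 100-119 and 130-149. */
	for_each_xfs_bitmap_extent(br, n, &all_owned) {
		/* reap blocks br->start .. br->start + br->len - 1 */
	}
out:
	xfs_bitmap_destroy(&still_used);
	xfs_bitmap_destroy(&all_owned);
	return error;
}
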
/*
* Record all btree blocks seen while iterating all records of a btree.
*
* We know that the btree query_all function starts at the left edge and walks
* towards the right edge of the tree. Therefore, we know that we can walk up
* the btree cursor towards the root; if the pointer for a given level points
* to the first record/key in that block, we haven't seen this block before;
* and therefore we need to remember that we saw this block in the btree.
*
* So if our btree is:
*
* 4
* / | \
* 1 2 3
*
* Pretend for this example that each leaf block has 100 btree records. For
* the first btree record, we'll observe that bc_ptrs[0] == 1, so we record
* that we saw block 1. Then we observe that bc_ptrs[1] == 1, so we record
* block 4. The list is [1, 4].
*
* For the second btree record, we see that bc_ptrs[0] == 2, so we exit the
* loop. The list remains [1, 4].
*
* For the 101st btree record, we've moved onto leaf block 2. Now
* bc_ptrs[0] == 1 again, so we record that we saw block 2. We see that
* bc_ptrs[1] == 2, so we exit the loop. The list is now [1, 4, 2].
*
* For the 102nd record, bc_ptrs[0] == 2, so we continue.
*
* For the 201st record, we've moved on to leaf block 3. bc_ptrs[0] == 1, so
* we add 3 to the list. Now it is [1, 4, 2, 3].
*
* For the 300th record we just exit, with the list being [1, 4, 2, 3].
*/
/*
* Record all the buffers pointed to by the btree cursor. Callers already
* engaged in a btree walk should call this function to capture the list of
* blocks going from the leaf towards the root.
*/
int
xfs_bitmap_set_btcur_path(
struct xfs_bitmap *bitmap,
struct xfs_btree_cur *cur)
{
struct xfs_buf *bp;
xfs_fsblock_t fsb;
int i;
int error;
for (i = 0; i < cur->bc_nlevels && cur->bc_ptrs[i] == 1; i++) {
xfs_btree_get_block(cur, i, &bp);
if (!bp)
continue;
fsb = XFS_DADDR_TO_FSB(cur->bc_mp, bp->b_bn);
error = xfs_bitmap_set(bitmap, fsb, 1);
if (error)
return error;
}
return 0;
}
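
A userspace model of that rule, using the same tree as the walkthrough above (three 100-record leaves under one root): a level's block is recorded only when the cursor points at entry 1 of that level, so each block is remembered exactly once:

#include <stdio.h>

int main(void)
{
	int rec, level;

	for (rec = 0; rec < 300; rec++) {
		/* bc_ptrs[] is 1-based: position within each level's block */
		int ptrs[2] = { rec % 100 + 1, rec / 100 + 1 };

		for (level = 0; level < 2 && ptrs[level] == 1; level++)
			printf("record %d: remember a level-%d block\n",
					rec + 1, level);
	}
	return 0;
}
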
/* Collect a btree's block in the bitmap. */
STATIC int
xfs_bitmap_collect_btblock(
struct xfs_btree_cur *cur,
int level,
void *priv)
{
struct xfs_bitmap *bitmap = priv;
struct xfs_buf *bp;
xfs_fsblock_t fsbno;
xfs_btree_get_block(cur, level, &bp);
if (!bp)
return 0;
fsbno = XFS_DADDR_TO_FSB(cur->bc_mp, bp->b_bn);
return xfs_bitmap_set(bitmap, fsbno, 1);
}
/* Walk the btree and mark the bitmap wherever a btree block is found. */
int
xfs_bitmap_set_btblocks(
struct xfs_bitmap *bitmap,
struct xfs_btree_cur *cur)
{
return xfs_btree_visit_blocks(cur, xfs_bitmap_collect_btblock, bitmap);
}

fs/xfs/scrub/bitmap.h (new file)

@@ -0,0 +1,36 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright (C) 2018 Oracle. All Rights Reserved.
* Author: Darrick J. Wong <darrick.wong@oracle.com>
*/
#ifndef __XFS_SCRUB_BITMAP_H__
#define __XFS_SCRUB_BITMAP_H__
struct xfs_bitmap_range {
struct list_head list;
uint64_t start;
uint64_t len;
};
struct xfs_bitmap {
struct list_head list;
};
void xfs_bitmap_init(struct xfs_bitmap *bitmap);
void xfs_bitmap_destroy(struct xfs_bitmap *bitmap);
#define for_each_xfs_bitmap_extent(bex, n, bitmap) \
list_for_each_entry_safe((bex), (n), &(bitmap)->list, list)
#define for_each_xfs_bitmap_block(b, bex, n, bitmap) \
list_for_each_entry_safe((bex), (n), &(bitmap)->list, list) \
for ((b) = bex->start; (b) < bex->start + bex->len; (b)++)
int xfs_bitmap_set(struct xfs_bitmap *bitmap, uint64_t start, uint64_t len);
int xfs_bitmap_disunion(struct xfs_bitmap *bitmap, struct xfs_bitmap *sub);
int xfs_bitmap_set_btcur_path(struct xfs_bitmap *bitmap,
struct xfs_btree_cur *cur);
int xfs_bitmap_set_btblocks(struct xfs_bitmap *bitmap,
struct xfs_btree_cur *cur);
#endif /* __XFS_SCRUB_BITMAP_H__ */

@@ -33,13 +33,13 @@
/* Set us up with an inode's bmap. */
int
xfs_scrub_setup_inode_bmap(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_inode_bmap(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
int error;
error = xfs_scrub_get_inode(sc, ip);
error = xchk_get_inode(sc, ip);
if (error)
goto out;
@@ -60,7 +60,7 @@ xfs_scrub_setup_inode_bmap(
}
/* Got the inode, lock it and we're ready to go. */
error = xfs_scrub_trans_alloc(sc, 0);
error = xchk_trans_alloc(sc, 0);
if (error)
goto out;
sc->ilock_flags |= XFS_ILOCK_EXCL;
@@ -78,27 +78,27 @@ out:
* is in btree format.
*/
struct xfs_scrub_bmap_info {
struct xfs_scrub_context *sc;
xfs_fileoff_t lastoff;
bool is_rt;
bool is_shared;
int whichfork;
struct xchk_bmap_info {
struct xfs_scrub *sc;
xfs_fileoff_t lastoff;
bool is_rt;
bool is_shared;
int whichfork;
};
/* Look for a corresponding rmap for this irec. */
static inline bool
xfs_scrub_bmap_get_rmap(
struct xfs_scrub_bmap_info *info,
struct xfs_bmbt_irec *irec,
xfs_agblock_t agbno,
uint64_t owner,
struct xfs_rmap_irec *rmap)
xchk_bmap_get_rmap(
struct xchk_bmap_info *info,
struct xfs_bmbt_irec *irec,
xfs_agblock_t agbno,
uint64_t owner,
struct xfs_rmap_irec *rmap)
{
xfs_fileoff_t offset;
unsigned int rflags = 0;
int has_rmap;
int error;
if (info->whichfork == XFS_ATTR_FORK)
rflags |= XFS_RMAP_ATTR_FORK;
@@ -120,7 +120,7 @@ xfs_scrub_bmap_get_rmap(
if (info->is_shared) {
error = xfs_rmap_lookup_le_range(info->sc->sa.rmap_cur, agbno,
owner, offset, rflags, rmap, &has_rmap);
if (!xfs_scrub_should_check_xref(info->sc, &error,
if (!xchk_should_check_xref(info->sc, &error,
&info->sc->sa.rmap_cur))
return false;
goto out;
@@ -131,36 +131,36 @@
*/
error = xfs_rmap_lookup_le(info->sc->sa.rmap_cur, agbno, 0, owner,
offset, rflags, &has_rmap);
if (!xfs_scrub_should_check_xref(info->sc, &error,
if (!xchk_should_check_xref(info->sc, &error,
&info->sc->sa.rmap_cur))
return false;
if (!has_rmap)
goto out;
error = xfs_rmap_get_rec(info->sc->sa.rmap_cur, rmap, &has_rmap);
if (!xfs_scrub_should_check_xref(info->sc, &error,
if (!xchk_should_check_xref(info->sc, &error,
&info->sc->sa.rmap_cur))
return false;
out:
if (!has_rmap)
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
return has_rmap;
}
/* Make sure that we have rmapbt records for this extent. */
STATIC void
xfs_scrub_bmap_xref_rmap(
struct xfs_scrub_bmap_info *info,
struct xfs_bmbt_irec *irec,
xfs_agblock_t agbno)
xchk_bmap_xref_rmap(
struct xchk_bmap_info *info,
struct xfs_bmbt_irec *irec,
xfs_agblock_t agbno)
{
struct xfs_rmap_irec rmap;
unsigned long long rmap_end;
uint64_t owner;
if (!info->sc->sa.rmap_cur || xfs_scrub_skip_xref(info->sc->sm))
if (!info->sc->sa.rmap_cur || xchk_skip_xref(info->sc->sm))
return;
if (info->whichfork == XFS_COW_FORK)
@@ -169,14 +169,14 @@ xfs_scrub_bmap_xref_rmap(
owner = info->sc->ip->i_ino;
/* Find the rmap record for this irec. */
if (!xfs_scrub_bmap_get_rmap(info, irec, agbno, owner, &rmap))
if (!xchk_bmap_get_rmap(info, irec, agbno, owner, &rmap))
return;
/* Check the rmap. */
rmap_end = (unsigned long long)rmap.rm_startblock + rmap.rm_blockcount;
if (rmap.rm_startblock > agbno ||
agbno + irec->br_blockcount > rmap_end)
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/*
@@ -189,12 +189,12 @@ xfs_scrub_bmap_xref_rmap(
rmap.rm_blockcount;
if (rmap.rm_offset > irec->br_startoff ||
irec->br_startoff + irec->br_blockcount > rmap_end)
xfs_scrub_fblock_xref_set_corrupt(info->sc,
xchk_fblock_xref_set_corrupt(info->sc,
info->whichfork, irec->br_startoff);
}
if (rmap.rm_owner != owner)
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/*
@@ -207,46 +207,46 @@ xfs_scrub_bmap_xref_rmap(
if (owner != XFS_RMAP_OWN_COW &&
irec->br_state == XFS_EXT_UNWRITTEN &&
!(rmap.rm_flags & XFS_RMAP_UNWRITTEN))
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
if (info->whichfork == XFS_ATTR_FORK &&
!(rmap.rm_flags & XFS_RMAP_ATTR_FORK))
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
if (rmap.rm_flags & XFS_RMAP_BMBT_BLOCK)
xfs_scrub_fblock_xref_set_corrupt(info->sc, info->whichfork,
xchk_fblock_xref_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
}
/* Cross-reference a single rtdev extent record. */
STATIC void
xfs_scrub_bmap_rt_extent_xref(
struct xfs_scrub_bmap_info *info,
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xfs_bmbt_irec *irec)
xchk_bmap_rt_extent_xref(
struct xchk_bmap_info *info,
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xfs_bmbt_irec *irec)
{
if (info->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
xfs_scrub_xref_is_used_rt_space(info->sc, irec->br_startblock,
xchk_xref_is_used_rt_space(info->sc, irec->br_startblock,
irec->br_blockcount);
}
/* Cross-reference a single datadev extent record. */
STATIC void
xfs_scrub_bmap_extent_xref(
struct xfs_scrub_bmap_info *info,
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xfs_bmbt_irec *irec)
xchk_bmap_extent_xref(
struct xchk_bmap_info *info,
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xfs_bmbt_irec *irec)
{
struct xfs_mount *mp = info->sc->mp;
xfs_agnumber_t agno;
xfs_agblock_t agbno;
xfs_extlen_t len;
int error;
if (info->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
@@ -255,44 +255,44 @@ xfs_scrub_bmap_extent_xref(
agbno = XFS_FSB_TO_AGBNO(mp, irec->br_startblock);
len = irec->br_blockcount;
error = xfs_scrub_ag_init(info->sc, agno, &info->sc->sa);
if (!xfs_scrub_fblock_process_error(info->sc, info->whichfork,
error = xchk_ag_init(info->sc, agno, &info->sc->sa);
if (!xchk_fblock_process_error(info->sc, info->whichfork,
irec->br_startoff, &error))
return;
xfs_scrub_xref_is_used_space(info->sc, agbno, len);
xfs_scrub_xref_is_not_inode_chunk(info->sc, agbno, len);
xfs_scrub_bmap_xref_rmap(info, irec, agbno);
xchk_xref_is_used_space(info->sc, agbno, len);
xchk_xref_is_not_inode_chunk(info->sc, agbno, len);
xchk_bmap_xref_rmap(info, irec, agbno);
switch (info->whichfork) {
case XFS_DATA_FORK:
if (xfs_is_reflink_inode(info->sc->ip))
break;
/* fall through */
case XFS_ATTR_FORK:
xfs_scrub_xref_is_not_shared(info->sc, agbno,
xchk_xref_is_not_shared(info->sc, agbno,
irec->br_blockcount);
break;
case XFS_COW_FORK:
xfs_scrub_xref_is_cow_staging(info->sc, agbno,
xchk_xref_is_cow_staging(info->sc, agbno,
irec->br_blockcount);
break;
}
xfs_scrub_ag_free(info->sc, &info->sc->sa);
xchk_ag_free(info->sc, &info->sc->sa);
}
/* Scrub a single extent record. */
STATIC int
xfs_scrub_bmap_extent(
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xfs_scrub_bmap_info *info,
struct xfs_bmbt_irec *irec)
xchk_bmap_extent(
struct xfs_inode *ip,
struct xfs_btree_cur *cur,
struct xchk_bmap_info *info,
struct xfs_bmbt_irec *irec)
{
struct xfs_mount *mp = info->sc->mp;
struct xfs_buf *bp = NULL;
xfs_filblks_t end;
int error = 0;
struct xfs_mount *mp = info->sc->mp;
struct xfs_buf *bp = NULL;
xfs_filblks_t end;
int error = 0;
if (cur)
xfs_btree_get_block(cur, 0, &bp);
@@ -302,12 +302,12 @@ xfs_scrub_bmap_extent(
* from the incore list, for which there is no ordering check.
*/
if (irec->br_startoff < info->lastoff)
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/* There should never be a "hole" extent in either extent list. */
if (irec->br_startblock == HOLESTARTBLOCK)
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/*
@@ -315,40 +315,40 @@ xfs_scrub_bmap_extent(
* in-core extent scan, and we should never see these in the bmbt.
*/
if (isnullstartblock(irec->br_startblock))
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/* Make sure the extent points to a valid place. */
if (irec->br_blockcount > MAXEXTLEN)
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
if (irec->br_startblock + irec->br_blockcount <= irec->br_startblock)
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
end = irec->br_startblock + irec->br_blockcount - 1;
if (info->is_rt &&
(!xfs_verify_rtbno(mp, irec->br_startblock) ||
!xfs_verify_rtbno(mp, end)))
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
if (!info->is_rt &&
(!xfs_verify_fsbno(mp, irec->br_startblock) ||
!xfs_verify_fsbno(mp, end) ||
XFS_FSB_TO_AGNO(mp, irec->br_startblock) !=
XFS_FSB_TO_AGNO(mp, end)))
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
/* We don't allow unwritten extents on attr forks. */
if (irec->br_state == XFS_EXT_UNWRITTEN &&
info->whichfork == XFS_ATTR_FORK)
xfs_scrub_fblock_set_corrupt(info->sc, info->whichfork,
xchk_fblock_set_corrupt(info->sc, info->whichfork,
irec->br_startoff);
if (info->is_rt)
xfs_scrub_bmap_rt_extent_xref(info, ip, cur, irec);
xchk_bmap_rt_extent_xref(info, ip, cur, irec);
else
xfs_scrub_bmap_extent_xref(info, ip, cur, irec);
xchk_bmap_extent_xref(info, ip, cur, irec);
info->lastoff = irec->br_startoff + irec->br_blockcount;
return error;
@@ -356,17 +356,17 @@ xfs_scrub_bmap_extent(
/* Scrub a bmbt record. */
STATIC int
xfs_scrub_bmapbt_rec(
struct xfs_scrub_btree *bs,
union xfs_btree_rec *rec)
xchk_bmapbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
{
struct xfs_bmbt_irec irec;
struct xfs_scrub_bmap_info *info = bs->private;
struct xfs_inode *ip = bs->cur->bc_private.b.ip;
struct xfs_buf *bp = NULL;
struct xfs_btree_block *block;
uint64_t owner;
int i;
struct xfs_bmbt_irec irec;
struct xchk_bmap_info *info = bs->private;
struct xfs_inode *ip = bs->cur->bc_private.b.ip;
struct xfs_buf *bp = NULL;
struct xfs_btree_block *block;
uint64_t owner;
int i;
/*
* Check the owners of the btree blocks up to the level below
@@ -378,54 +378,53 @@ xfs_scrub_bmapbt_rec(
block = xfs_btree_get_block(bs->cur, i, &bp);
owner = be64_to_cpu(block->bb_u.l.bb_owner);
if (owner != ip->i_ino)
xfs_scrub_fblock_set_corrupt(bs->sc,
xchk_fblock_set_corrupt(bs->sc,
info->whichfork, 0);
}
}
/* Set up the in-core record and scrub it. */
xfs_bmbt_disk_get_all(&rec->bmbt, &irec);
return xfs_scrub_bmap_extent(ip, bs->cur, info, &irec);
return xchk_bmap_extent(ip, bs->cur, info, &irec);
}
/* Scan the btree records. */
STATIC int
xfs_scrub_bmap_btree(
struct xfs_scrub_context *sc,
int whichfork,
struct xfs_scrub_bmap_info *info)
xchk_bmap_btree(
struct xfs_scrub *sc,
int whichfork,
struct xchk_bmap_info *info)
{
struct xfs_owner_info oinfo;
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = sc->ip;
struct xfs_btree_cur *cur;
int error;
struct xfs_owner_info oinfo;
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = sc->ip;
struct xfs_btree_cur *cur;
int error;
cur = xfs_bmbt_init_cursor(mp, sc->tp, ip, whichfork);
xfs_rmap_ino_bmbt_owner(&oinfo, ip->i_ino, whichfork);
error = xfs_scrub_btree(sc, cur, xfs_scrub_bmapbt_rec, &oinfo, info);
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR :
XFS_BTREE_NOERROR);
error = xchk_btree(sc, cur, xchk_bmapbt_rec, &oinfo, info);
xfs_btree_del_cursor(cur, error);
return error;
}
struct xfs_scrub_bmap_check_rmap_info {
struct xfs_scrub_context *sc;
int whichfork;
struct xfs_iext_cursor icur;
struct xchk_bmap_check_rmap_info {
struct xfs_scrub *sc;
int whichfork;
struct xfs_iext_cursor icur;
};
/* Can we find bmaps that fit this rmap? */
STATIC int
xfs_scrub_bmap_check_rmap(
xchk_bmap_check_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
{
struct xfs_bmbt_irec irec;
struct xfs_scrub_bmap_check_rmap_info *sbcri = priv;
struct xchk_bmap_check_rmap_info *sbcri = priv;
struct xfs_ifork *ifp;
struct xfs_scrub_context *sc = sbcri->sc;
struct xfs_scrub *sc = sbcri->sc;
bool have_map;
/* Is this even the right fork? */
@@ -440,14 +439,14 @@ xfs_scrub_bmap_check_rmap(
/* Now look up the bmbt record. */
ifp = XFS_IFORK_PTR(sc->ip, sbcri->whichfork);
if (!ifp) {
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
goto out;
}
have_map = xfs_iext_lookup_extent(sc->ip, ifp, rec->rm_offset,
&sbcri->icur, &irec);
if (!have_map)
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
/*
* bmap extent record lengths are constrained to 2^21 blocks in length
@@ -458,14 +457,14 @@ xfs_scrub_bmap_check_rmap(
*/
while (have_map) {
if (irec.br_startoff != rec->rm_offset)
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
if (irec.br_startblock != XFS_AGB_TO_FSB(sc->mp,
cur->bc_private.a.agno, rec->rm_startblock))
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
if (irec.br_blockcount > rec->rm_blockcount)
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
break;
@@ -476,7 +475,7 @@ xfs_scrub_bmap_check_rmap(
break;
have_map = xfs_iext_next_extent(ifp, &sbcri->icur, &irec);
if (!have_map)
xfs_scrub_fblock_set_corrupt(sc, sbcri->whichfork,
xchk_fblock_set_corrupt(sc, sbcri->whichfork,
rec->rm_offset);
}
@@ -488,12 +487,12 @@ out:
/* Make sure each rmap has a corresponding bmbt entry. */
STATIC int
xfs_scrub_bmap_check_ag_rmaps(
struct xfs_scrub_context *sc,
xchk_bmap_check_ag_rmaps(
struct xfs_scrub *sc,
int whichfork,
xfs_agnumber_t agno)
{
struct xfs_scrub_bmap_check_rmap_info sbcri;
struct xchk_bmap_check_rmap_info sbcri;
struct xfs_btree_cur *cur;
struct xfs_buf *agf;
int error;
@@ -510,11 +509,11 @@ xfs_scrub_bmap_check_ag_rmaps(
sbcri.sc = sc;
sbcri.whichfork = whichfork;
error = xfs_rmap_query_all(cur, xfs_scrub_bmap_check_rmap, &sbcri);
error = xfs_rmap_query_all(cur, xchk_bmap_check_rmap, &sbcri);
if (error == XFS_BTREE_QUERY_RANGE_ABORT)
error = 0;
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
out_agf:
xfs_trans_brelse(sc->tp, agf);
return error;
@@ -522,13 +521,13 @@ out_agf:
/* Make sure each rmap has a corresponding bmbt entry. */
STATIC int
xfs_scrub_bmap_check_rmaps(
struct xfs_scrub_context *sc,
int whichfork)
xchk_bmap_check_rmaps(
struct xfs_scrub *sc,
int whichfork)
{
loff_t size;
xfs_agnumber_t agno;
int error;
loff_t size;
xfs_agnumber_t agno;
int error;
if (!xfs_sb_version_hasrmapbt(&sc->mp->m_sb) ||
whichfork == XFS_COW_FORK ||
@@ -562,7 +561,7 @@ xfs_scrub_bmap_check_rmaps(
return 0;
for (agno = 0; agno < sc->mp->m_sb.sb_agcount; agno++) {
error = xfs_scrub_bmap_check_ag_rmaps(sc, whichfork, agno);
error = xchk_bmap_check_ag_rmaps(sc, whichfork, agno);
if (error)
return error;
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
@@ -579,18 +578,18 @@ xfs_scrub_bmap_check_rmaps(
* Then we unconditionally scan the incore extent cache.
*/
STATIC int
xfs_scrub_bmap(
struct xfs_scrub_context *sc,
int whichfork)
xchk_bmap(
struct xfs_scrub *sc,
int whichfork)
{
struct xfs_bmbt_irec irec;
struct xfs_scrub_bmap_info info = { NULL };
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = sc->ip;
struct xfs_ifork *ifp;
xfs_fileoff_t endoff;
struct xfs_iext_cursor icur;
int error = 0;
struct xfs_bmbt_irec irec;
struct xchk_bmap_info info = { NULL };
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = sc->ip;
struct xfs_ifork *ifp;
xfs_fileoff_t endoff;
struct xfs_iext_cursor icur;
int error = 0;
ifp = XFS_IFORK_PTR(ip, whichfork);
@@ -606,7 +605,7 @@ xfs_scrub_bmap(
goto out;
/* No CoW forks on non-reflink inodes/filesystems. */
if (!xfs_is_reflink_inode(ip)) {
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
goto out;
}
break;
@@ -615,7 +614,7 @@ xfs_scrub_bmap(
goto out_check_rmap;
if (!xfs_sb_version_hasattr(&mp->m_sb) &&
!xfs_sb_version_hasattr2(&mp->m_sb))
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
break;
default:
ASSERT(whichfork == XFS_DATA_FORK);
@@ -631,22 +630,22 @@ xfs_scrub_bmap(
goto out;
case XFS_DINODE_FMT_EXTENTS:
if (!(ifp->if_flags & XFS_IFEXTENTS)) {
xfs_scrub_fblock_set_corrupt(sc, whichfork, 0);
xchk_fblock_set_corrupt(sc, whichfork, 0);
goto out;
}
break;
case XFS_DINODE_FMT_BTREE:
if (whichfork == XFS_COW_FORK) {
xfs_scrub_fblock_set_corrupt(sc, whichfork, 0);
xchk_fblock_set_corrupt(sc, whichfork, 0);
goto out;
}
error = xfs_scrub_bmap_btree(sc, whichfork, &info);
error = xchk_bmap_btree(sc, whichfork, &info);
if (error)
goto out;
break;
default:
xfs_scrub_fblock_set_corrupt(sc, whichfork, 0);
xchk_fblock_set_corrupt(sc, whichfork, 0);
goto out;
}
@@ -656,37 +655,37 @@ xfs_scrub_bmap(
/* Now try to scrub the in-memory extent list. */
if (!(ifp->if_flags & XFS_IFEXTENTS)) {
error = xfs_iread_extents(sc->tp, ip, whichfork);
if (!xfs_scrub_fblock_process_error(sc, whichfork, 0, &error))
if (!xchk_fblock_process_error(sc, whichfork, 0, &error))
goto out;
}
/* Find the offset of the last extent in the mapping. */
error = xfs_bmap_last_offset(ip, &endoff, whichfork);
if (!xfs_scrub_fblock_process_error(sc, whichfork, 0, &error))
if (!xchk_fblock_process_error(sc, whichfork, 0, &error))
goto out;
/* Scrub extent records. */
info.lastoff = 0;
ifp = XFS_IFORK_PTR(ip, whichfork);
for_each_xfs_iext(ifp, &icur, &irec) {
if (xfs_scrub_should_terminate(sc, &error) ||
if (xchk_should_terminate(sc, &error) ||
(sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
break;
if (isnullstartblock(irec.br_startblock))
continue;
if (irec.br_startoff >= endoff) {
xfs_scrub_fblock_set_corrupt(sc, whichfork,
xchk_fblock_set_corrupt(sc, whichfork,
irec.br_startoff);
goto out;
}
error = xfs_scrub_bmap_extent(ip, NULL, &info, &irec);
error = xchk_bmap_extent(ip, NULL, &info, &irec);
if (error)
goto out;
}
out_check_rmap:
error = xfs_scrub_bmap_check_rmaps(sc, whichfork);
if (!xfs_scrub_fblock_xref_process_error(sc, whichfork, 0, &error))
error = xchk_bmap_check_rmaps(sc, whichfork);
if (!xchk_fblock_xref_process_error(sc, whichfork, 0, &error))
goto out;
out:
return error;
@@ -694,27 +693,27 @@ out:
/* Scrub an inode's data fork. */
int
xfs_scrub_bmap_data(
struct xfs_scrub_context *sc)
xchk_bmap_data(
struct xfs_scrub *sc)
{
return xfs_scrub_bmap(sc, XFS_DATA_FORK);
return xchk_bmap(sc, XFS_DATA_FORK);
}
/* Scrub an inode's attr fork. */
int
xfs_scrub_bmap_attr(
struct xfs_scrub_context *sc)
xchk_bmap_attr(
struct xfs_scrub *sc)
{
return xfs_scrub_bmap(sc, XFS_ATTR_FORK);
return xchk_bmap(sc, XFS_ATTR_FORK);
}
/* Scrub an inode's CoW fork. */
int
xfs_scrub_bmap_cow(
struct xfs_scrub_context *sc)
xchk_bmap_cow(
struct xfs_scrub *sc)
{
if (!xfs_is_reflink_inode(sc->ip))
return -ENOENT;
return xfs_scrub_bmap(sc, XFS_COW_FORK);
return xchk_bmap(sc, XFS_COW_FORK);
}


@@ -29,13 +29,13 @@
* operational errors in common.c.
*/
static bool
__xfs_scrub_btree_process_error(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level,
int *error,
__u32 errflag,
void *ret_ip)
__xchk_btree_process_error(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level,
int *error,
__u32 errflag,
void *ret_ip)
{
if (*error == 0)
return true;
@@ -43,7 +43,7 @@ __xfs_scrub_btree_process_error(
switch (*error) {
case -EDEADLOCK:
/* Used to restart an op with deadlock avoidance. */
trace_xfs_scrub_deadlock_retry(sc->ip, sc->sm, *error);
trace_xchk_deadlock_retry(sc->ip, sc->sm, *error);
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
@@ -53,10 +53,10 @@ __xfs_scrub_btree_process_error(
/* fall through */
default:
if (cur->bc_flags & XFS_BTREE_ROOT_IN_INODE)
trace_xfs_scrub_ifork_btree_op_error(sc, cur, level,
trace_xchk_ifork_btree_op_error(sc, cur, level,
*error, ret_ip);
else
trace_xfs_scrub_btree_op_error(sc, cur, level,
trace_xchk_btree_op_error(sc, cur, level,
*error, ret_ip);
break;
}
@@ -64,63 +64,63 @@ __xfs_scrub_btree_process_error(
}
bool
xfs_scrub_btree_process_error(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level,
int *error)
xchk_btree_process_error(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level,
int *error)
{
return __xfs_scrub_btree_process_error(sc, cur, level, error,
return __xchk_btree_process_error(sc, cur, level, error,
XFS_SCRUB_OFLAG_CORRUPT, __return_address);
}
bool
xfs_scrub_btree_xref_process_error(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level,
int *error)
xchk_btree_xref_process_error(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level,
int *error)
{
return __xfs_scrub_btree_process_error(sc, cur, level, error,
return __xchk_btree_process_error(sc, cur, level, error,
XFS_SCRUB_OFLAG_XFAIL, __return_address);
}
/* Record btree block corruption. */
static void
__xfs_scrub_btree_set_corrupt(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level,
__u32 errflag,
void *ret_ip)
__xchk_btree_set_corrupt(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level,
__u32 errflag,
void *ret_ip)
{
sc->sm->sm_flags |= errflag;
if (cur->bc_flags & XFS_BTREE_ROOT_IN_INODE)
trace_xfs_scrub_ifork_btree_error(sc, cur, level,
trace_xchk_ifork_btree_error(sc, cur, level,
ret_ip);
else
trace_xfs_scrub_btree_error(sc, cur, level,
trace_xchk_btree_error(sc, cur, level,
ret_ip);
}
void
xfs_scrub_btree_set_corrupt(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level)
xchk_btree_set_corrupt(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level)
{
__xfs_scrub_btree_set_corrupt(sc, cur, level, XFS_SCRUB_OFLAG_CORRUPT,
__xchk_btree_set_corrupt(sc, cur, level, XFS_SCRUB_OFLAG_CORRUPT,
__return_address);
}
void
xfs_scrub_btree_xref_set_corrupt(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
int level)
xchk_btree_xref_set_corrupt(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
int level)
{
__xfs_scrub_btree_set_corrupt(sc, cur, level, XFS_SCRUB_OFLAG_XCORRUPT,
__xchk_btree_set_corrupt(sc, cur, level, XFS_SCRUB_OFLAG_XCORRUPT,
__return_address);
}
@@ -129,8 +129,8 @@ xfs_scrub_btree_xref_set_corrupt(
* keys.
*/
STATIC void
xfs_scrub_btree_rec(
struct xfs_scrub_btree *bs)
xchk_btree_rec(
struct xchk_btree *bs)
{
struct xfs_btree_cur *cur = bs->cur;
union xfs_btree_rec *rec;
@@ -144,11 +144,11 @@ xfs_scrub_btree_rec(
block = xfs_btree_get_block(cur, 0, &bp);
rec = xfs_btree_rec_addr(cur, cur->bc_ptrs[0], block);
trace_xfs_scrub_btree_rec(bs->sc, cur, 0);
trace_xchk_btree_rec(bs->sc, cur, 0);
/* If this isn't the first record, are they in order? */
if (!bs->firstrec && !cur->bc_ops->recs_inorder(cur, &bs->lastrec, rec))
xfs_scrub_btree_set_corrupt(bs->sc, cur, 0);
xchk_btree_set_corrupt(bs->sc, cur, 0);
bs->firstrec = false;
memcpy(&bs->lastrec, rec, cur->bc_ops->rec_len);
@@ -160,7 +160,7 @@ xfs_scrub_btree_rec(
keyblock = xfs_btree_get_block(cur, 1, &bp);
keyp = xfs_btree_key_addr(cur, cur->bc_ptrs[1], keyblock);
if (cur->bc_ops->diff_two_keys(cur, &key, keyp) < 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, 1);
xchk_btree_set_corrupt(bs->sc, cur, 1);
if (!(cur->bc_flags & XFS_BTREE_OVERLAPPING))
return;
@@ -169,7 +169,7 @@ xfs_scrub_btree_rec(
cur->bc_ops->init_high_key_from_rec(&hkey, rec);
keyp = xfs_btree_high_key_addr(cur, cur->bc_ptrs[1], keyblock);
if (cur->bc_ops->diff_two_keys(cur, keyp, &hkey) < 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, 1);
xchk_btree_set_corrupt(bs->sc, cur, 1);
}
/*
@@ -177,8 +177,8 @@ xfs_scrub_btree_rec(
* keys.
*/
STATIC void
xfs_scrub_btree_key(
struct xfs_scrub_btree *bs,
xchk_btree_key(
struct xchk_btree *bs,
int level)
{
struct xfs_btree_cur *cur = bs->cur;
@@ -191,12 +191,12 @@ xfs_scrub_btree_key(
block = xfs_btree_get_block(cur, level, &bp);
key = xfs_btree_key_addr(cur, cur->bc_ptrs[level], block);
trace_xfs_scrub_btree_key(bs->sc, cur, level);
trace_xchk_btree_key(bs->sc, cur, level);
/* If this isn't the first key, are they in order? */
if (!bs->firstkey[level] &&
!cur->bc_ops->keys_inorder(cur, &bs->lastkey[level], key))
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
bs->firstkey[level] = false;
memcpy(&bs->lastkey[level], key, cur->bc_ops->key_len);
@@ -207,7 +207,7 @@ xfs_scrub_btree_key(
keyblock = xfs_btree_get_block(cur, level + 1, &bp);
keyp = xfs_btree_key_addr(cur, cur->bc_ptrs[level + 1], keyblock);
if (cur->bc_ops->diff_two_keys(cur, key, keyp) < 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
if (!(cur->bc_flags & XFS_BTREE_OVERLAPPING))
return;
@@ -216,7 +216,7 @@ xfs_scrub_btree_key(
key = xfs_btree_high_key_addr(cur, cur->bc_ptrs[level], block);
keyp = xfs_btree_high_key_addr(cur, cur->bc_ptrs[level + 1], keyblock);
if (cur->bc_ops->diff_two_keys(cur, keyp, key) < 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
}
/*
@@ -224,12 +224,12 @@ xfs_scrub_btree_key(
* Callers do not need to set the corrupt flag.
*/
static bool
xfs_scrub_btree_ptr_ok(
struct xfs_scrub_btree *bs,
int level,
union xfs_btree_ptr *ptr)
xchk_btree_ptr_ok(
struct xchk_btree *bs,
int level,
union xfs_btree_ptr *ptr)
{
bool res;
bool res;
/* A btree rooted in an inode has no block pointer to the root. */
if ((bs->cur->bc_flags & XFS_BTREE_ROOT_IN_INODE) &&
@@ -242,29 +242,29 @@ xfs_scrub_btree_ptr_ok(
else
res = xfs_btree_check_sptr(bs->cur, be32_to_cpu(ptr->s), level);
if (!res)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, level);
xchk_btree_set_corrupt(bs->sc, bs->cur, level);
return res;
}
/* Check that a btree block's sibling matches what we expect. */
STATIC int
xfs_scrub_btree_block_check_sibling(
struct xfs_scrub_btree *bs,
int level,
int direction,
union xfs_btree_ptr *sibling)
xchk_btree_block_check_sibling(
struct xchk_btree *bs,
int level,
int direction,
union xfs_btree_ptr *sibling)
{
struct xfs_btree_cur *cur = bs->cur;
struct xfs_btree_block *pblock;
struct xfs_buf *pbp;
struct xfs_btree_cur *ncur = NULL;
union xfs_btree_ptr *pp;
int success;
int error;
struct xfs_btree_cur *cur = bs->cur;
struct xfs_btree_block *pblock;
struct xfs_buf *pbp;
struct xfs_btree_cur *ncur = NULL;
union xfs_btree_ptr *pp;
int success;
int error;
error = xfs_btree_dup_cursor(cur, &ncur);
if (!xfs_scrub_btree_process_error(bs->sc, cur, level + 1, &error) ||
if (!xchk_btree_process_error(bs->sc, cur, level + 1, &error) ||
!ncur)
return error;
@@ -278,7 +278,7 @@ xfs_scrub_btree_block_check_sibling(
else
error = xfs_btree_decrement(ncur, level + 1, &success);
if (error == 0 && success)
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
error = 0;
goto out;
}
@@ -288,23 +288,23 @@ xfs_scrub_btree_block_check_sibling(
error = xfs_btree_increment(ncur, level + 1, &success);
else
error = xfs_btree_decrement(ncur, level + 1, &success);
if (!xfs_scrub_btree_process_error(bs->sc, cur, level + 1, &error))
if (!xchk_btree_process_error(bs->sc, cur, level + 1, &error))
goto out;
if (!success) {
xfs_scrub_btree_set_corrupt(bs->sc, cur, level + 1);
xchk_btree_set_corrupt(bs->sc, cur, level + 1);
goto out;
}
/* Compare upper level pointer to sibling pointer. */
pblock = xfs_btree_get_block(ncur, level + 1, &pbp);
pp = xfs_btree_ptr_addr(ncur, ncur->bc_ptrs[level + 1], pblock);
if (!xfs_scrub_btree_ptr_ok(bs, level + 1, pp))
if (!xchk_btree_ptr_ok(bs, level + 1, pp))
goto out;
if (pbp)
xfs_scrub_buffer_recheck(bs->sc, pbp);
xchk_buffer_recheck(bs->sc, pbp);
if (xfs_btree_diff_two_ptrs(cur, pp, sibling))
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
out:
xfs_btree_del_cursor(ncur, XFS_BTREE_ERROR);
return error;
@@ -312,15 +312,15 @@ out:
/* Check the siblings of a btree block. */
STATIC int
xfs_scrub_btree_block_check_siblings(
struct xfs_scrub_btree *bs,
struct xfs_btree_block *block)
xchk_btree_block_check_siblings(
struct xchk_btree *bs,
struct xfs_btree_block *block)
{
struct xfs_btree_cur *cur = bs->cur;
union xfs_btree_ptr leftsib;
union xfs_btree_ptr rightsib;
int level;
int error = 0;
struct xfs_btree_cur *cur = bs->cur;
union xfs_btree_ptr leftsib;
union xfs_btree_ptr rightsib;
int level;
int error = 0;
xfs_btree_get_sibling(cur, block, &leftsib, XFS_BB_LEFTSIB);
xfs_btree_get_sibling(cur, block, &rightsib, XFS_BB_RIGHTSIB);
@@ -330,7 +330,7 @@ xfs_scrub_btree_block_check_siblings(
if (level == cur->bc_nlevels - 1) {
if (!xfs_btree_ptr_is_null(cur, &leftsib) ||
!xfs_btree_ptr_is_null(cur, &rightsib))
xfs_scrub_btree_set_corrupt(bs->sc, cur, level);
xchk_btree_set_corrupt(bs->sc, cur, level);
goto out;
}
@@ -339,10 +339,10 @@ xfs_scrub_btree_block_check_siblings(
* parent level pointers?
* (These functions absorb error codes for us.)
*/
error = xfs_scrub_btree_block_check_sibling(bs, level, -1, &leftsib);
error = xchk_btree_block_check_sibling(bs, level, -1, &leftsib);
if (error)
return error;
error = xfs_scrub_btree_block_check_sibling(bs, level, 1, &rightsib);
error = xchk_btree_block_check_sibling(bs, level, 1, &rightsib);
if (error)
return error;
out:
@@ -360,16 +360,16 @@ struct check_owner {
* an rmap record for it.
*/
STATIC int
xfs_scrub_btree_check_block_owner(
struct xfs_scrub_btree *bs,
int level,
xfs_daddr_t daddr)
xchk_btree_check_block_owner(
struct xchk_btree *bs,
int level,
xfs_daddr_t daddr)
{
xfs_agnumber_t agno;
xfs_agblock_t agbno;
xfs_btnum_t btnum;
bool init_sa;
int error = 0;
xfs_agnumber_t agno;
xfs_agblock_t agbno;
xfs_btnum_t btnum;
bool init_sa;
int error = 0;
if (!bs->cur)
return 0;
@@ -380,13 +380,13 @@ xfs_scrub_btree_check_block_owner(
init_sa = bs->cur->bc_flags & XFS_BTREE_LONG_PTRS;
if (init_sa) {
error = xfs_scrub_ag_init(bs->sc, agno, &bs->sc->sa);
if (!xfs_scrub_btree_xref_process_error(bs->sc, bs->cur,
error = xchk_ag_init(bs->sc, agno, &bs->sc->sa);
if (!xchk_btree_xref_process_error(bs->sc, bs->cur,
level, &error))
return error;
}
xfs_scrub_xref_is_used_space(bs->sc, agbno, 1);
xchk_xref_is_used_space(bs->sc, agbno, 1);
/*
* The bnobt scrubber aliases bs->cur to bs->sc->sa.bno_cur, so we
* have to nullify it (to shut down further block owner checks) if
@@ -395,25 +395,25 @@ xfs_scrub_btree_check_block_owner(
if (!bs->sc->sa.bno_cur && btnum == XFS_BTNUM_BNO)
bs->cur = NULL;
xfs_scrub_xref_is_owned_by(bs->sc, agbno, 1, bs->oinfo);
xchk_xref_is_owned_by(bs->sc, agbno, 1, bs->oinfo);
if (!bs->sc->sa.rmap_cur && btnum == XFS_BTNUM_RMAP)
bs->cur = NULL;
if (init_sa)
xfs_scrub_ag_free(bs->sc, &bs->sc->sa);
xchk_ag_free(bs->sc, &bs->sc->sa);
return error;
}
/* Check the owner of a btree block. */
STATIC int
xfs_scrub_btree_check_owner(
struct xfs_scrub_btree *bs,
int level,
struct xfs_buf *bp)
xchk_btree_check_owner(
struct xchk_btree *bs,
int level,
struct xfs_buf *bp)
{
struct xfs_btree_cur *cur = bs->cur;
struct check_owner *co;
struct xfs_btree_cur *cur = bs->cur;
struct check_owner *co;
if ((cur->bc_flags & XFS_BTREE_ROOT_IN_INODE) && bp == NULL)
return 0;
@@ -437,7 +437,7 @@ xfs_scrub_btree_check_owner(
return 0;
}
return xfs_scrub_btree_check_block_owner(bs, level, XFS_BUF_ADDR(bp));
return xchk_btree_check_block_owner(bs, level, XFS_BUF_ADDR(bp));
}
/*
@@ -445,8 +445,8 @@ xfs_scrub_btree_check_owner(
* special blocks that don't require that.
*/
STATIC void
xfs_scrub_btree_check_minrecs(
struct xfs_scrub_btree *bs,
xchk_btree_check_minrecs(
struct xchk_btree *bs,
int level,
struct xfs_btree_block *block)
{
@@ -475,7 +475,7 @@ xfs_scrub_btree_check_minrecs(
if (level >= ok_level)
return;
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, level);
xchk_btree_set_corrupt(bs->sc, bs->cur, level);
}
/*
@@ -483,21 +483,21 @@
* and buffer pointers (if applicable) if they're ok to use.
*/
STATIC int
xfs_scrub_btree_get_block(
struct xfs_scrub_btree *bs,
int level,
union xfs_btree_ptr *pp,
struct xfs_btree_block **pblock,
struct xfs_buf **pbp)
xchk_btree_get_block(
struct xchk_btree *bs,
int level,
union xfs_btree_ptr *pp,
struct xfs_btree_block **pblock,
struct xfs_buf **pbp)
{
void *failed_at;
int error;
xfs_failaddr_t failed_at;
int error;
*pblock = NULL;
*pbp = NULL;
error = xfs_btree_lookup_get_block(bs->cur, level, pp, pblock);
if (!xfs_scrub_btree_process_error(bs->sc, bs->cur, level, &error) ||
if (!xchk_btree_process_error(bs->sc, bs->cur, level, &error) ||
!*pblock)
return error;
@@ -509,19 +509,19 @@ xfs_scrub_btree_get_block(
failed_at = __xfs_btree_check_sblock(bs->cur, *pblock,
level, *pbp);
if (failed_at) {
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, level);
xchk_btree_set_corrupt(bs->sc, bs->cur, level);
return 0;
}
if (*pbp)
xfs_scrub_buffer_recheck(bs->sc, *pbp);
xchk_buffer_recheck(bs->sc, *pbp);
xfs_scrub_btree_check_minrecs(bs, level, *pblock);
xchk_btree_check_minrecs(bs, level, *pblock);
/*
* Check the block's owner; this function absorbs error codes
* for us.
*/
error = xfs_scrub_btree_check_owner(bs, level, *pbp);
error = xchk_btree_check_owner(bs, level, *pbp);
if (error)
return error;
@@ -529,7 +529,7 @@ xfs_scrub_btree_get_block(
* Check the block's siblings; this function absorbs error codes
* for us.
*/
return xfs_scrub_btree_block_check_siblings(bs, *pblock);
return xchk_btree_block_check_siblings(bs, *pblock);
}
/*
@@ -537,18 +537,18 @@ xfs_scrub_btree_get_block(
* in the parent block.
*/
STATIC void
xfs_scrub_btree_block_keys(
struct xfs_scrub_btree *bs,
int level,
struct xfs_btree_block *block)
xchk_btree_block_keys(
struct xchk_btree *bs,
int level,
struct xfs_btree_block *block)
{
union xfs_btree_key block_keys;
struct xfs_btree_cur *cur = bs->cur;
union xfs_btree_key *high_bk;
union xfs_btree_key *parent_keys;
union xfs_btree_key *high_pk;
struct xfs_btree_block *parent_block;
struct xfs_buf *bp;
union xfs_btree_key block_keys;
struct xfs_btree_cur *cur = bs->cur;
union xfs_btree_key *high_bk;
union xfs_btree_key *parent_keys;
union xfs_btree_key *high_pk;
struct xfs_btree_block *parent_block;
struct xfs_buf *bp;
if (level >= cur->bc_nlevels - 1)
return;
@@ -562,7 +562,7 @@ xfs_scrub_btree_block_keys(
parent_block);
if (cur->bc_ops->diff_two_keys(cur, &block_keys, parent_keys) != 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, 1);
xchk_btree_set_corrupt(bs->sc, cur, 1);
if (!(cur->bc_flags & XFS_BTREE_OVERLAPPING))
return;
@@ -573,7 +573,7 @@ xfs_scrub_btree_block_keys(
parent_block);
if (cur->bc_ops->diff_two_keys(cur, high_bk, high_pk) != 0)
xfs_scrub_btree_set_corrupt(bs->sc, cur, 1);
xchk_btree_set_corrupt(bs->sc, cur, 1);
}
/*
@@ -582,24 +582,24 @@ xfs_scrub_btree_block_keys(
* so that the caller can verify individual records.
*/
int
xfs_scrub_btree(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
xfs_scrub_btree_rec_fn scrub_fn,
struct xfs_owner_info *oinfo,
void *private)
xchk_btree(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
xchk_btree_rec_fn scrub_fn,
struct xfs_owner_info *oinfo,
void *private)
{
struct xfs_scrub_btree bs = { NULL };
union xfs_btree_ptr ptr;
union xfs_btree_ptr *pp;
union xfs_btree_rec *recp;
struct xfs_btree_block *block;
int level;
struct xfs_buf *bp;
struct check_owner *co;
struct check_owner *n;
int i;
int error = 0;
struct xchk_btree bs = { NULL };
union xfs_btree_ptr ptr;
union xfs_btree_ptr *pp;
union xfs_btree_rec *recp;
struct xfs_btree_block *block;
int level;
struct xfs_buf *bp;
struct check_owner *co;
struct check_owner *n;
int i;
int error = 0;
/* Initialize scrub state */
bs.cur = cur;
@@ -614,7 +614,7 @@ xfs_scrub_btree(
/* Don't try to check a tree with a height we can't handle. */
if (cur->bc_nlevels > XFS_BTREE_MAXLEVELS) {
xfs_scrub_btree_set_corrupt(sc, cur, 0);
xchk_btree_set_corrupt(sc, cur, 0);
goto out;
}
@@ -624,9 +624,9 @@
*/
level = cur->bc_nlevels - 1;
cur->bc_ops->init_ptr_from_cur(cur, &ptr);
if (!xfs_scrub_btree_ptr_ok(&bs, cur->bc_nlevels, &ptr))
if (!xchk_btree_ptr_ok(&bs, cur->bc_nlevels, &ptr))
goto out;
error = xfs_scrub_btree_get_block(&bs, level, &ptr, &block, &bp);
error = xchk_btree_get_block(&bs, level, &ptr, &block, &bp);
if (error || !block)
goto out;
@@ -639,7 +639,7 @@ xfs_scrub_btree(
/* End of leaf, pop back towards the root. */
if (cur->bc_ptrs[level] >
be16_to_cpu(block->bb_numrecs)) {
xfs_scrub_btree_block_keys(&bs, level, block);
xchk_btree_block_keys(&bs, level, block);
if (level < cur->bc_nlevels - 1)
cur->bc_ptrs[level + 1]++;
level++;
@@ -647,14 +647,14 @@
}
/* Records in order for scrub? */
xfs_scrub_btree_rec(&bs);
xchk_btree_rec(&bs);
/* Call out to the record checker. */
recp = xfs_btree_rec_addr(cur, cur->bc_ptrs[0], block);
error = bs.scrub_rec(&bs, recp);
if (error)
break;
if (xfs_scrub_should_terminate(sc, &error) ||
if (xchk_should_terminate(sc, &error) ||
(sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
break;
@@ -664,7 +664,7 @@ xfs_scrub_btree(
/* End of node, pop back towards the root. */
if (cur->bc_ptrs[level] > be16_to_cpu(block->bb_numrecs)) {
xfs_scrub_btree_block_keys(&bs, level, block);
xchk_btree_block_keys(&bs, level, block);
if (level < cur->bc_nlevels - 1)
cur->bc_ptrs[level + 1]++;
level++;
@@ -672,16 +672,16 @@
}
/* Keys in order for scrub? */
xfs_scrub_btree_key(&bs, level);
xchk_btree_key(&bs, level);
/* Drill another level deeper. */
pp = xfs_btree_ptr_addr(cur, cur->bc_ptrs[level], block);
if (!xfs_scrub_btree_ptr_ok(&bs, level, pp)) {
if (!xchk_btree_ptr_ok(&bs, level, pp)) {
cur->bc_ptrs[level]++;
continue;
}
level--;
error = xfs_scrub_btree_get_block(&bs, level, pp, &block, &bp);
error = xchk_btree_get_block(&bs, level, pp, &block, &bp);
if (error || !block)
goto out;
@@ -692,7 +692,7 @@ out:
/* Process deferred owner checks on btree blocks. */
list_for_each_entry_safe(co, n, &bs.to_check, list) {
if (!error && bs.cur)
error = xfs_scrub_btree_check_block_owner(&bs,
error = xchk_btree_check_block_owner(&bs,
co->level, co->daddr);
list_del(&co->list);
kmem_free(co);


@@ -9,44 +9,43 @@
/* btree scrub */
/* Check for btree operation errors. */
bool xfs_scrub_btree_process_error(struct xfs_scrub_context *sc,
bool xchk_btree_process_error(struct xfs_scrub *sc,
struct xfs_btree_cur *cur, int level, int *error);
/* Check for btree xref operation errors. */
bool xfs_scrub_btree_xref_process_error(struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur, int level,
int *error);
bool xchk_btree_xref_process_error(struct xfs_scrub *sc,
struct xfs_btree_cur *cur, int level, int *error);
/* Check for btree corruption. */
void xfs_scrub_btree_set_corrupt(struct xfs_scrub_context *sc,
void xchk_btree_set_corrupt(struct xfs_scrub *sc,
struct xfs_btree_cur *cur, int level);
/* Check for btree xref discrepancies. */
void xfs_scrub_btree_xref_set_corrupt(struct xfs_scrub_context *sc,
void xchk_btree_xref_set_corrupt(struct xfs_scrub *sc,
struct xfs_btree_cur *cur, int level);
struct xfs_scrub_btree;
typedef int (*xfs_scrub_btree_rec_fn)(
struct xfs_scrub_btree *bs,
struct xchk_btree;
typedef int (*xchk_btree_rec_fn)(
struct xchk_btree *bs,
union xfs_btree_rec *rec);
struct xfs_scrub_btree {
struct xchk_btree {
/* caller-provided scrub state */
struct xfs_scrub_context *sc;
struct xfs_btree_cur *cur;
xfs_scrub_btree_rec_fn scrub_rec;
struct xfs_owner_info *oinfo;
void *private;
struct xfs_scrub *sc;
struct xfs_btree_cur *cur;
xchk_btree_rec_fn scrub_rec;
struct xfs_owner_info *oinfo;
void *private;
/* internal scrub state */
union xfs_btree_rec lastrec;
bool firstrec;
union xfs_btree_key lastkey[XFS_BTREE_MAXLEVELS];
bool firstkey[XFS_BTREE_MAXLEVELS];
struct list_head to_check;
union xfs_btree_rec lastrec;
bool firstrec;
union xfs_btree_key lastkey[XFS_BTREE_MAXLEVELS];
bool firstkey[XFS_BTREE_MAXLEVELS];
struct list_head to_check;
};
int xfs_scrub_btree(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
xfs_scrub_btree_rec_fn scrub_fn,
struct xfs_owner_info *oinfo, void *private);
int xchk_btree(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
xchk_btree_rec_fn scrub_fn, struct xfs_owner_info *oinfo,
void *private);
#endif /* __XFS_SCRUB_BTREE_H__ */


@@ -68,20 +68,20 @@
/* Check for operational errors. */
static bool
__xfs_scrub_process_error(
struct xfs_scrub_context *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error,
__u32 errflag,
void *ret_ip)
__xchk_process_error(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error,
__u32 errflag,
void *ret_ip)
{
switch (*error) {
case 0:
return true;
case -EDEADLOCK:
/* Used to restart an op with deadlock avoidance. */
trace_xfs_scrub_deadlock_retry(sc->ip, sc->sm, *error);
trace_xchk_deadlock_retry(sc->ip, sc->sm, *error);
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
@@ -90,7 +90,7 @@ __xfs_scrub_process_error(
*error = 0;
/* fall through */
default:
trace_xfs_scrub_op_error(sc, agno, bno, *error,
trace_xchk_op_error(sc, agno, bno, *error,
ret_ip);
break;
}
@@ -98,43 +98,43 @@
}
bool
xfs_scrub_process_error(
struct xfs_scrub_context *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error)
xchk_process_error(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error)
{
return __xfs_scrub_process_error(sc, agno, bno, error,
return __xchk_process_error(sc, agno, bno, error,
XFS_SCRUB_OFLAG_CORRUPT, __return_address);
}
bool
xfs_scrub_xref_process_error(
struct xfs_scrub_context *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error)
xchk_xref_process_error(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
xfs_agblock_t bno,
int *error)
{
return __xfs_scrub_process_error(sc, agno, bno, error,
return __xchk_process_error(sc, agno, bno, error,
XFS_SCRUB_OFLAG_XFAIL, __return_address);
}
/* Check for operational errors for a file offset. */
static bool
__xfs_scrub_fblock_process_error(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset,
int *error,
__u32 errflag,
void *ret_ip)
__xchk_fblock_process_error(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset,
int *error,
__u32 errflag,
void *ret_ip)
{
switch (*error) {
case 0:
return true;
case -EDEADLOCK:
/* Used to restart an op with deadlock avoidance. */
trace_xfs_scrub_deadlock_retry(sc->ip, sc->sm, *error);
trace_xchk_deadlock_retry(sc->ip, sc->sm, *error);
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
@@ -143,7 +143,7 @@ __xfs_scrub_fblock_process_error(
*error = 0;
/* fall through */
default:
trace_xfs_scrub_file_op_error(sc, whichfork, offset, *error,
trace_xchk_file_op_error(sc, whichfork, offset, *error,
ret_ip);
break;
}
@@ -151,24 +151,24 @@
}
bool
xfs_scrub_fblock_process_error(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset,
int *error)
xchk_fblock_process_error(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset,
int *error)
{
return __xfs_scrub_fblock_process_error(sc, whichfork, offset, error,
return __xchk_fblock_process_error(sc, whichfork, offset, error,
XFS_SCRUB_OFLAG_CORRUPT, __return_address);
}
bool
xfs_scrub_fblock_xref_process_error(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset,
int *error)
xchk_fblock_xref_process_error(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset,
int *error)
{
return __xfs_scrub_fblock_process_error(sc, whichfork, offset, error,
return __xchk_fblock_process_error(sc, whichfork, offset, error,
XFS_SCRUB_OFLAG_XFAIL, __return_address);
}
@@ -186,12 +186,12 @@ xfs_scrub_fblock_xref_process_error(
/* Record a block which could be optimized. */
void
xfs_scrub_block_set_preen(
struct xfs_scrub_context *sc,
struct xfs_buf *bp)
xchk_block_set_preen(
struct xfs_scrub *sc,
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_PREEN;
trace_xfs_scrub_block_preen(sc, bp->b_bn, __return_address);
trace_xchk_block_preen(sc, bp->b_bn, __return_address);
}
/*
@@ -200,32 +200,32 @@ xfs_scrub_block_set_preen(
* the block location of the inode record itself.
*/
void
xfs_scrub_ino_set_preen(
struct xfs_scrub_context *sc,
xfs_ino_t ino)
xchk_ino_set_preen(
struct xfs_scrub *sc,
xfs_ino_t ino)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_PREEN;
trace_xfs_scrub_ino_preen(sc, ino, __return_address);
trace_xchk_ino_preen(sc, ino, __return_address);
}
/* Record a corrupt block. */
void
xfs_scrub_block_set_corrupt(
struct xfs_scrub_context *sc,
struct xfs_buf *bp)
xchk_block_set_corrupt(
struct xfs_scrub *sc,
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xfs_scrub_block_error(sc, bp->b_bn, __return_address);
trace_xchk_block_error(sc, bp->b_bn, __return_address);
}
/* Record a corruption while cross-referencing. */
void
xfs_scrub_block_xref_set_corrupt(
struct xfs_scrub_context *sc,
struct xfs_buf *bp)
xchk_block_xref_set_corrupt(
struct xfs_scrub *sc,
struct xfs_buf *bp)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_XCORRUPT;
trace_xfs_scrub_block_error(sc, bp->b_bn, __return_address);
trace_xchk_block_error(sc, bp->b_bn, __return_address);
}
/*
@@ -234,44 +234,44 @@ xfs_scrub_block_xref_set_corrupt(
* inode record itself.
*/
void
xfs_scrub_ino_set_corrupt(
struct xfs_scrub_context *sc,
xfs_ino_t ino)
xchk_ino_set_corrupt(
struct xfs_scrub *sc,
xfs_ino_t ino)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xfs_scrub_ino_error(sc, ino, __return_address);
trace_xchk_ino_error(sc, ino, __return_address);
}
/* Record a corruption while cross-referencing with an inode. */
void
xfs_scrub_ino_xref_set_corrupt(
struct xfs_scrub_context *sc,
xfs_ino_t ino)
xchk_ino_xref_set_corrupt(
struct xfs_scrub *sc,
xfs_ino_t ino)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_XCORRUPT;
trace_xfs_scrub_ino_error(sc, ino, __return_address);
trace_xchk_ino_error(sc, ino, __return_address);
}
/* Record corruption in a block indexed by a file fork. */
void
xfs_scrub_fblock_set_corrupt(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset)
xchk_fblock_set_corrupt(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xfs_scrub_fblock_error(sc, whichfork, offset, __return_address);
trace_xchk_fblock_error(sc, whichfork, offset, __return_address);
}
/* Record a corruption while cross-referencing a fork block. */
void
xfs_scrub_fblock_xref_set_corrupt(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset)
xchk_fblock_xref_set_corrupt(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_XCORRUPT;
trace_xfs_scrub_fblock_error(sc, whichfork, offset, __return_address);
trace_xchk_fblock_error(sc, whichfork, offset, __return_address);
}
/*
@@ -279,32 +279,32 @@ xfs_scrub_fblock_xref_set_corrupt(
* incorrect.
*/
void
xfs_scrub_ino_set_warning(
struct xfs_scrub_context *sc,
xfs_ino_t ino)
xchk_ino_set_warning(
struct xfs_scrub *sc,
xfs_ino_t ino)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_WARNING;
trace_xfs_scrub_ino_warning(sc, ino, __return_address);
trace_xchk_ino_warning(sc, ino, __return_address);
}
/* Warn about a block indexed by a file fork that needs review. */
void
xfs_scrub_fblock_set_warning(
struct xfs_scrub_context *sc,
int whichfork,
xfs_fileoff_t offset)
xchk_fblock_set_warning(
struct xfs_scrub *sc,
int whichfork,
xfs_fileoff_t offset)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_WARNING;
trace_xfs_scrub_fblock_warning(sc, whichfork, offset, __return_address);
trace_xchk_fblock_warning(sc, whichfork, offset, __return_address);
}
/* Signal an incomplete scrub. */
void
xfs_scrub_set_incomplete(
struct xfs_scrub_context *sc)
xchk_set_incomplete(
struct xfs_scrub *sc)
{
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_INCOMPLETE;
trace_xfs_scrub_incomplete(sc, __return_address);
trace_xchk_incomplete(sc, __return_address);
}
/*
@@ -312,20 +312,20 @@ xfs_scrub_set_incomplete(
* at least according to the reverse mapping data.
*/
struct xfs_scrub_rmap_ownedby_info {
struct xchk_rmap_ownedby_info {
struct xfs_owner_info *oinfo;
xfs_filblks_t *blocks;
};
STATIC int
xfs_scrub_count_rmap_ownedby_irec(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
xchk_count_rmap_ownedby_irec(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
{
struct xfs_scrub_rmap_ownedby_info *sroi = priv;
bool irec_attr;
bool oinfo_attr;
struct xchk_rmap_ownedby_info *sroi = priv;
bool irec_attr;
bool oinfo_attr;
irec_attr = rec->rm_flags & XFS_RMAP_ATTR_FORK;
oinfo_attr = sroi->oinfo->oi_flags & XFS_OWNER_INFO_ATTR_FORK;
@@ -344,19 +344,19 @@ xfs_scrub_count_rmap_ownedby_irec(
* The caller should pass us an rmapbt cursor.
*/
int
xfs_scrub_count_rmap_ownedby_ag(
struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
struct xfs_owner_info *oinfo,
xfs_filblks_t *blocks)
xchk_count_rmap_ownedby_ag(
struct xfs_scrub *sc,
struct xfs_btree_cur *cur,
struct xfs_owner_info *oinfo,
xfs_filblks_t *blocks)
{
struct xfs_scrub_rmap_ownedby_info sroi;
struct xchk_rmap_ownedby_info sroi;
sroi.oinfo = oinfo;
*blocks = 0;
sroi.blocks = blocks;
return xfs_rmap_query_all(cur, xfs_scrub_count_rmap_ownedby_irec,
return xfs_rmap_query_all(cur, xchk_count_rmap_ownedby_irec,
&sroi);
}
@@ -371,8 +371,8 @@ xfs_scrub_count_rmap_ownedby_ag(
/* Decide if we want to return an AG header read failure. */
static inline bool
want_ag_read_header_failure(
struct xfs_scrub_context *sc,
unsigned int type)
struct xfs_scrub *sc,
unsigned int type)
{
/* Return all AG header read failures when scanning btrees. */
if (sc->sm->sm_type != XFS_SCRUB_TYPE_AGF &&
@@ -392,20 +392,20 @@ want_ag_read_header_failure(
/*
* Grab all the headers for an AG.
*
* The headers should be released by xfs_scrub_ag_free, but as a fail
* The headers should be released by xchk_ag_free, but as a fail
* safe we attach all the buffers we grab to the scrub transaction so
* they'll all be freed when we cancel it.
*/
int
xfs_scrub_ag_read_headers(
struct xfs_scrub_context *sc,
xfs_agnumber_t agno,
struct xfs_buf **agi,
struct xfs_buf **agf,
struct xfs_buf **agfl)
xchk_ag_read_headers(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
struct xfs_buf **agi,
struct xfs_buf **agf,
struct xfs_buf **agfl)
{
struct xfs_mount *mp = sc->mp;
int error;
struct xfs_mount *mp = sc->mp;
int error;
error = xfs_ialloc_read_agi(mp, sc->tp, agno, agi);
if (error && want_ag_read_header_failure(sc, XFS_SCRUB_TYPE_AGI))
@@ -425,8 +425,8 @@ out:
/* Release all the AG btree cursors. */
void
xfs_scrub_ag_btcur_free(
struct xfs_scrub_ag *sa)
xchk_ag_btcur_free(
struct xchk_ag *sa)
{
if (sa->refc_cur)
xfs_btree_del_cursor(sa->refc_cur, XFS_BTREE_ERROR);
@@ -451,12 +451,12 @@ xfs_scrub_ag_btcur_free(
/* Initialize all the btree cursors for an AG. */
int
xfs_scrub_ag_btcur_init(
struct xfs_scrub_context *sc,
struct xfs_scrub_ag *sa)
xchk_ag_btcur_init(
struct xfs_scrub *sc,
struct xchk_ag *sa)
{
struct xfs_mount *mp = sc->mp;
xfs_agnumber_t agno = sa->agno;
struct xfs_mount *mp = sc->mp;
xfs_agnumber_t agno = sa->agno;
if (sa->agf_bp) {
/* Set up a bnobt cursor for cross-referencing. */
@@ -499,7 +499,7 @@ xfs_scrub_ag_btcur_init(
/* Set up a refcountbt cursor for cross-referencing. */
if (sa->agf_bp && xfs_sb_version_hasreflink(&mp->m_sb)) {
sa->refc_cur = xfs_refcountbt_init_cursor(mp, sc->tp,
sa->agf_bp, agno, NULL);
sa->agf_bp, agno);
if (!sa->refc_cur)
goto err;
}
@@ -511,11 +511,11 @@ err:
/* Release the AG header context and btree cursors. */
void
xfs_scrub_ag_free(
struct xfs_scrub_context *sc,
struct xfs_scrub_ag *sa)
xchk_ag_free(
struct xfs_scrub *sc,
struct xchk_ag *sa)
{
xfs_scrub_ag_btcur_free(sa);
xchk_ag_btcur_free(sa);
if (sa->agfl_bp) {
xfs_trans_brelse(sc->tp, sa->agfl_bp);
sa->agfl_bp = NULL;
@@ -543,30 +543,30 @@ xfs_scrub_ag_free(
* transaction ourselves.
*/
int
xfs_scrub_ag_init(
struct xfs_scrub_context *sc,
xfs_agnumber_t agno,
struct xfs_scrub_ag *sa)
xchk_ag_init(
struct xfs_scrub *sc,
xfs_agnumber_t agno,
struct xchk_ag *sa)
{
int error;
int error;
sa->agno = agno;
error = xfs_scrub_ag_read_headers(sc, agno, &sa->agi_bp,
error = xchk_ag_read_headers(sc, agno, &sa->agi_bp,
&sa->agf_bp, &sa->agfl_bp);
if (error)
return error;
return xfs_scrub_ag_btcur_init(sc, sa);
return xchk_ag_btcur_init(sc, sa);
}
/*
* Grab the per-ag structure if we haven't already gotten it. Teardown of the
* xfs_scrub_ag will release it for us.
* xchk_ag will release it for us.
*/
void
xfs_scrub_perag_get(
xchk_perag_get(
struct xfs_mount *mp,
struct xfs_scrub_ag *sa)
struct xchk_ag *sa)
{
if (!sa->pag)
sa->pag = xfs_perag_get(mp, sa->agno);
@@ -585,9 +585,9 @@ xfs_scrub_perag_get(
* the metadata object.
*/
int
xfs_scrub_trans_alloc(
struct xfs_scrub_context *sc,
uint resblks)
xchk_trans_alloc(
struct xfs_scrub *sc,
uint resblks)
{
if (sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR)
return xfs_trans_alloc(sc->mp, &M_RES(sc->mp)->tr_itruncate,
@@ -598,25 +598,25 @@ xfs_scrub_trans_alloc(
/* Set us up with a transaction and an empty context. */
int
xfs_scrub_setup_fs(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_fs(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
uint resblks;
uint resblks;
resblks = xfs_repair_calc_ag_resblks(sc);
return xfs_scrub_trans_alloc(sc, resblks);
resblks = xrep_calc_ag_resblks(sc);
return xchk_trans_alloc(sc, resblks);
}
/* Set us up with AG headers and btree cursors. */
int
xfs_scrub_setup_ag_btree(
struct xfs_scrub_context *sc,
struct xfs_inode *ip,
bool force_log)
xchk_setup_ag_btree(
struct xfs_scrub *sc,
struct xfs_inode *ip,
bool force_log)
{
struct xfs_mount *mp = sc->mp;
int error;
struct xfs_mount *mp = sc->mp;
int error;
/*
* If the caller asks us to checkpoint the log, do so. This
@@ -625,21 +625,21 @@ xfs_scrub_setup_ag_btree(
* document why they need to do so.
*/
if (force_log) {
error = xfs_scrub_checkpoint_log(mp);
error = xchk_checkpoint_log(mp);
if (error)
return error;
}
error = xfs_scrub_setup_fs(sc, ip);
error = xchk_setup_fs(sc, ip);
if (error)
return error;
return xfs_scrub_ag_init(sc, sc->sm->sm_agno, &sc->sa);
return xchk_ag_init(sc, sc->sm->sm_agno, &sc->sa);
}
/* Push everything out of the log onto disk. */
int
xfs_scrub_checkpoint_log(
xchk_checkpoint_log(
struct xfs_mount *mp)
{
int error;
@@ -657,14 +657,14 @@ xfs_scrub_checkpoint_log(
* The inode is not locked.
*/
int
xfs_scrub_get_inode(
struct xfs_scrub_context *sc,
struct xfs_inode *ip_in)
xchk_get_inode(
struct xfs_scrub *sc,
struct xfs_inode *ip_in)
{
struct xfs_imap imap;
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = NULL;
int error;
struct xfs_imap imap;
struct xfs_mount *mp = sc->mp;
struct xfs_inode *ip = NULL;
int error;
/* We want to scan the inode we already had opened. */
if (sc->sm->sm_ino == 0 || sc->sm->sm_ino == ip_in->i_ino) {
@@ -704,14 +704,14 @@ xfs_scrub_get_inode(
error = -EFSCORRUPTED;
/* fall through */
default:
trace_xfs_scrub_op_error(sc,
trace_xchk_op_error(sc,
XFS_INO_TO_AGNO(mp, sc->sm->sm_ino),
XFS_INO_TO_AGBNO(mp, sc->sm->sm_ino),
error, __return_address);
return error;
}
if (VFS_I(ip)->i_generation != sc->sm->sm_gen) {
iput(VFS_I(ip));
xfs_irele(ip);
return -ENOENT;
}
@@ -721,21 +721,21 @@ xfs_scrub_get_inode(
/* Set us up to scrub a file's contents. */
int
xfs_scrub_setup_inode_contents(
struct xfs_scrub_context *sc,
struct xfs_inode *ip,
unsigned int resblks)
xchk_setup_inode_contents(
struct xfs_scrub *sc,
struct xfs_inode *ip,
unsigned int resblks)
{
int error;
int error;
error = xfs_scrub_get_inode(sc, ip);
error = xchk_get_inode(sc, ip);
if (error)
return error;
/* Got the inode, lock it and we're ready to go. */
sc->ilock_flags = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
xfs_ilock(sc->ip, sc->ilock_flags);
error = xfs_scrub_trans_alloc(sc, resblks);
error = xchk_trans_alloc(sc, resblks);
if (error)
goto out;
sc->ilock_flags |= XFS_ILOCK_EXCL;
@@ -752,13 +752,13 @@ out:
* the cursor and skip the check.
*/
bool
xfs_scrub_should_check_xref(
struct xfs_scrub_context *sc,
int *error,
struct xfs_btree_cur **curpp)
xchk_should_check_xref(
struct xfs_scrub *sc,
int *error,
struct xfs_btree_cur **curpp)
{
/* No point in xref if we already know we're corrupt. */
if (xfs_scrub_skip_xref(sc->sm))
if (xchk_skip_xref(sc->sm))
return false;
if (*error == 0)
@ -775,7 +775,7 @@ xfs_scrub_should_check_xref(
}
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_XFAIL;
trace_xfs_scrub_xref_error(sc, *error, __return_address);
trace_xchk_xref_error(sc, *error, __return_address);
/*
* Errors encountered during cross-referencing with another
@@ -787,25 +787,25 @@ xfs_scrub_should_check_xref(
/* Run the structure verifiers on in-memory buffers to detect bad memory. */
void
xfs_scrub_buffer_recheck(
struct xfs_scrub_context *sc,
struct xfs_buf *bp)
xchk_buffer_recheck(
struct xfs_scrub *sc,
struct xfs_buf *bp)
{
xfs_failaddr_t fa;
xfs_failaddr_t fa;
if (bp->b_ops == NULL) {
xfs_scrub_block_set_corrupt(sc, bp);
xchk_block_set_corrupt(sc, bp);
return;
}
if (bp->b_ops->verify_struct == NULL) {
xfs_scrub_set_incomplete(sc);
xchk_set_incomplete(sc);
return;
}
fa = bp->b_ops->verify_struct(bp);
if (!fa)
return;
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xfs_scrub_block_error(sc, bp->b_bn, fa);
trace_xchk_block_error(sc, bp->b_bn, fa);
}
/*
@@ -813,38 +813,38 @@ xfs_scrub_buffer_recheck(
* pointed to by sc->ip and the ILOCK must be held.
*/
int
xfs_scrub_metadata_inode_forks(
struct xfs_scrub_context *sc)
xchk_metadata_inode_forks(
struct xfs_scrub *sc)
{
__u32 smtype;
bool shared;
int error;
__u32 smtype;
bool shared;
int error;
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return 0;
/* Metadata inodes don't live on the rt device. */
if (sc->ip->i_d.di_flags & XFS_DIFLAG_REALTIME) {
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
return 0;
}
/* They should never participate in reflink. */
if (xfs_is_reflink_inode(sc->ip)) {
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
return 0;
}
/* They also should never have extended attributes. */
if (xfs_inode_hasattr(sc->ip)) {
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
return 0;
}
/* Invoke the data fork scrubber. */
smtype = sc->sm->sm_type;
sc->sm->sm_type = XFS_SCRUB_TYPE_BMBTD;
error = xfs_scrub_bmap_data(sc);
error = xchk_bmap_data(sc);
sc->sm->sm_type = smtype;
if (error || (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
return error;
@@ -853,11 +853,11 @@ xfs_scrub_metadata_inode_forks(
if (xfs_sb_version_hasreflink(&sc->mp->m_sb)) {
error = xfs_reflink_inode_has_shared_extents(sc->tp, sc->ip,
&shared);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0,
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0,
&error))
return error;
if (shared)
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
}
return error;
@@ -871,7 +871,7 @@ xfs_scrub_metadata_inode_forks(
* we can't.
*/
int
xfs_scrub_ilock_inverted(
xchk_ilock_inverted(
struct xfs_inode *ip,
uint lock_mode)
{


@@ -12,8 +12,8 @@
* Note that we're careful not to make any judgements about *error.
*/
static inline bool
xfs_scrub_should_terminate(
struct xfs_scrub_context *sc,
xchk_should_terminate(
struct xfs_scrub *sc,
int *error)
{
if (fatal_signal_pending(current)) {
@@ -24,121 +24,118 @@ xfs_scrub_should_terminate(
return false;
}
int xfs_scrub_trans_alloc(struct xfs_scrub_context *sc, uint resblks);
bool xfs_scrub_process_error(struct xfs_scrub_context *sc, xfs_agnumber_t agno,
int xchk_trans_alloc(struct xfs_scrub *sc, uint resblks);
bool xchk_process_error(struct xfs_scrub *sc, xfs_agnumber_t agno,
xfs_agblock_t bno, int *error);
bool xfs_scrub_fblock_process_error(struct xfs_scrub_context *sc, int whichfork,
bool xchk_fblock_process_error(struct xfs_scrub *sc, int whichfork,
xfs_fileoff_t offset, int *error);
bool xfs_scrub_xref_process_error(struct xfs_scrub_context *sc,
bool xchk_xref_process_error(struct xfs_scrub *sc,
xfs_agnumber_t agno, xfs_agblock_t bno, int *error);
bool xfs_scrub_fblock_xref_process_error(struct xfs_scrub_context *sc,
bool xchk_fblock_xref_process_error(struct xfs_scrub *sc,
int whichfork, xfs_fileoff_t offset, int *error);
void xfs_scrub_block_set_preen(struct xfs_scrub_context *sc,
void xchk_block_set_preen(struct xfs_scrub *sc,
struct xfs_buf *bp);
void xfs_scrub_ino_set_preen(struct xfs_scrub_context *sc, xfs_ino_t ino);
void xchk_ino_set_preen(struct xfs_scrub *sc, xfs_ino_t ino);
void xfs_scrub_block_set_corrupt(struct xfs_scrub_context *sc,
void xchk_block_set_corrupt(struct xfs_scrub *sc,
struct xfs_buf *bp);
void xfs_scrub_ino_set_corrupt(struct xfs_scrub_context *sc, xfs_ino_t ino);
void xfs_scrub_fblock_set_corrupt(struct xfs_scrub_context *sc, int whichfork,
void xchk_ino_set_corrupt(struct xfs_scrub *sc, xfs_ino_t ino);
void xchk_fblock_set_corrupt(struct xfs_scrub *sc, int whichfork,
xfs_fileoff_t offset);
void xfs_scrub_block_xref_set_corrupt(struct xfs_scrub_context *sc,
void xchk_block_xref_set_corrupt(struct xfs_scrub *sc,
struct xfs_buf *bp);
void xfs_scrub_ino_xref_set_corrupt(struct xfs_scrub_context *sc,
void xchk_ino_xref_set_corrupt(struct xfs_scrub *sc,
xfs_ino_t ino);
void xfs_scrub_fblock_xref_set_corrupt(struct xfs_scrub_context *sc,
void xchk_fblock_xref_set_corrupt(struct xfs_scrub *sc,
int whichfork, xfs_fileoff_t offset);
void xfs_scrub_ino_set_warning(struct xfs_scrub_context *sc, xfs_ino_t ino);
void xfs_scrub_fblock_set_warning(struct xfs_scrub_context *sc, int whichfork,
void xchk_ino_set_warning(struct xfs_scrub *sc, xfs_ino_t ino);
void xchk_fblock_set_warning(struct xfs_scrub *sc, int whichfork,
xfs_fileoff_t offset);
void xfs_scrub_set_incomplete(struct xfs_scrub_context *sc);
int xfs_scrub_checkpoint_log(struct xfs_mount *mp);
void xchk_set_incomplete(struct xfs_scrub *sc);
int xchk_checkpoint_log(struct xfs_mount *mp);
/* Are we set up for a cross-referencing check? */
bool xfs_scrub_should_check_xref(struct xfs_scrub_context *sc, int *error,
bool xchk_should_check_xref(struct xfs_scrub *sc, int *error,
struct xfs_btree_cur **curpp);
/* Setup functions */
int xfs_scrub_setup_fs(struct xfs_scrub_context *sc, struct xfs_inode *ip);
int xfs_scrub_setup_ag_allocbt(struct xfs_scrub_context *sc,
int xchk_setup_fs(struct xfs_scrub *sc, struct xfs_inode *ip);
int xchk_setup_ag_allocbt(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_ag_iallocbt(struct xfs_scrub_context *sc,
int xchk_setup_ag_iallocbt(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_ag_rmapbt(struct xfs_scrub_context *sc,
int xchk_setup_ag_rmapbt(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_ag_refcountbt(struct xfs_scrub_context *sc,
int xchk_setup_ag_refcountbt(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_inode(struct xfs_scrub_context *sc,
int xchk_setup_inode(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_inode_bmap(struct xfs_scrub_context *sc,
int xchk_setup_inode_bmap(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_inode_bmap_data(struct xfs_scrub_context *sc,
int xchk_setup_inode_bmap_data(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_directory(struct xfs_scrub_context *sc,
int xchk_setup_directory(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_xattr(struct xfs_scrub_context *sc,
int xchk_setup_xattr(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_symlink(struct xfs_scrub_context *sc,
int xchk_setup_symlink(struct xfs_scrub *sc,
struct xfs_inode *ip);
int xfs_scrub_setup_parent(struct xfs_scrub_context *sc,
int xchk_setup_parent(struct xfs_scrub *sc,
struct xfs_inode *ip);
#ifdef CONFIG_XFS_RT
int xfs_scrub_setup_rt(struct xfs_scrub_context *sc, struct xfs_inode *ip);
int xchk_setup_rt(struct xfs_scrub *sc, struct xfs_inode *ip);
#else
static inline int
xfs_scrub_setup_rt(struct xfs_scrub_context *sc, struct xfs_inode *ip)
xchk_setup_rt(struct xfs_scrub *sc, struct xfs_inode *ip)
{
return -ENOENT;
}
#endif
#ifdef CONFIG_XFS_QUOTA
int xfs_scrub_setup_quota(struct xfs_scrub_context *sc, struct xfs_inode *ip);
int xchk_setup_quota(struct xfs_scrub *sc, struct xfs_inode *ip);
#else
static inline int
xfs_scrub_setup_quota(struct xfs_scrub_context *sc, struct xfs_inode *ip)
xchk_setup_quota(struct xfs_scrub *sc, struct xfs_inode *ip)
{
return -ENOENT;
}
#endif
void xfs_scrub_ag_free(struct xfs_scrub_context *sc, struct xfs_scrub_ag *sa);
int xfs_scrub_ag_init(struct xfs_scrub_context *sc, xfs_agnumber_t agno,
struct xfs_scrub_ag *sa);
void xfs_scrub_perag_get(struct xfs_mount *mp, struct xfs_scrub_ag *sa);
int xfs_scrub_ag_read_headers(struct xfs_scrub_context *sc, xfs_agnumber_t agno,
struct xfs_buf **agi, struct xfs_buf **agf,
struct xfs_buf **agfl);
void xfs_scrub_ag_btcur_free(struct xfs_scrub_ag *sa);
int xfs_scrub_ag_btcur_init(struct xfs_scrub_context *sc,
struct xfs_scrub_ag *sa);
int xfs_scrub_count_rmap_ownedby_ag(struct xfs_scrub_context *sc,
struct xfs_btree_cur *cur,
struct xfs_owner_info *oinfo,
xfs_filblks_t *blocks);
void xchk_ag_free(struct xfs_scrub *sc, struct xchk_ag *sa);
int xchk_ag_init(struct xfs_scrub *sc, xfs_agnumber_t agno,
struct xchk_ag *sa);
void xchk_perag_get(struct xfs_mount *mp, struct xchk_ag *sa);
int xchk_ag_read_headers(struct xfs_scrub *sc, xfs_agnumber_t agno,
struct xfs_buf **agi, struct xfs_buf **agf,
struct xfs_buf **agfl);
void xchk_ag_btcur_free(struct xchk_ag *sa);
int xchk_ag_btcur_init(struct xfs_scrub *sc, struct xchk_ag *sa);
int xchk_count_rmap_ownedby_ag(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
struct xfs_owner_info *oinfo, xfs_filblks_t *blocks);
int xfs_scrub_setup_ag_btree(struct xfs_scrub_context *sc,
struct xfs_inode *ip, bool force_log);
int xfs_scrub_get_inode(struct xfs_scrub_context *sc, struct xfs_inode *ip_in);
int xfs_scrub_setup_inode_contents(struct xfs_scrub_context *sc,
struct xfs_inode *ip, unsigned int resblks);
void xfs_scrub_buffer_recheck(struct xfs_scrub_context *sc, struct xfs_buf *bp);
int xchk_setup_ag_btree(struct xfs_scrub *sc, struct xfs_inode *ip,
bool force_log);
int xchk_get_inode(struct xfs_scrub *sc, struct xfs_inode *ip_in);
int xchk_setup_inode_contents(struct xfs_scrub *sc, struct xfs_inode *ip,
unsigned int resblks);
void xchk_buffer_recheck(struct xfs_scrub *sc, struct xfs_buf *bp);
/*
* Don't bother cross-referencing if we already found corruption or cross
* referencing discrepancies.
*/
static inline bool xfs_scrub_skip_xref(struct xfs_scrub_metadata *sm)
static inline bool xchk_skip_xref(struct xfs_scrub_metadata *sm)
{
return sm->sm_flags & (XFS_SCRUB_OFLAG_CORRUPT |
XFS_SCRUB_OFLAG_XCORRUPT);
}
int xfs_scrub_metadata_inode_forks(struct xfs_scrub_context *sc);
int xfs_scrub_ilock_inverted(struct xfs_inode *ip, uint lock_mode);
int xchk_metadata_inode_forks(struct xfs_scrub *sc);
int xchk_ilock_inverted(struct xfs_inode *ip, uint lock_mode);
#endif /* __XFS_SCRUB_COMMON_H__ */

@@ -35,12 +35,12 @@
* operational errors in common.c.
*/
bool
xfs_scrub_da_process_error(
struct xfs_scrub_da_btree *ds,
int level,
int *error)
xchk_da_process_error(
struct xchk_da_btree *ds,
int level,
int *error)
{
struct xfs_scrub_context *sc = ds->sc;
struct xfs_scrub *sc = ds->sc;
if (*error == 0)
return true;
@@ -48,7 +48,7 @@ xfs_scrub_da_process_error(
switch (*error) {
case -EDEADLOCK:
/* Used to restart an op with deadlock avoidance. */
trace_xfs_scrub_deadlock_retry(sc->ip, sc->sm, *error);
trace_xchk_deadlock_retry(sc->ip, sc->sm, *error);
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
@@ -57,7 +57,7 @@ xfs_scrub_da_process_error(
*error = 0;
/* fall through */
default:
trace_xfs_scrub_file_op_error(sc, ds->dargs.whichfork,
trace_xchk_file_op_error(sc, ds->dargs.whichfork,
xfs_dir2_da_to_db(ds->dargs.geo,
ds->state->path.blk[level].blkno),
*error, __return_address);
@@ -71,15 +71,15 @@
* operational errors in common.c.
*/
void
xfs_scrub_da_set_corrupt(
struct xfs_scrub_da_btree *ds,
int level)
xchk_da_set_corrupt(
struct xchk_da_btree *ds,
int level)
{
struct xfs_scrub_context *sc = ds->sc;
struct xfs_scrub *sc = ds->sc;
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
trace_xfs_scrub_fblock_error(sc, ds->dargs.whichfork,
trace_xchk_fblock_error(sc, ds->dargs.whichfork,
xfs_dir2_da_to_db(ds->dargs.geo,
ds->state->path.blk[level].blkno),
__return_address);
@@ -87,14 +87,14 @@ xfs_scrub_da_set_corrupt(
/* Find an entry at a certain level in a da btree. */
STATIC void *
xfs_scrub_da_btree_entry(
struct xfs_scrub_da_btree *ds,
int level,
int rec)
xchk_da_btree_entry(
struct xchk_da_btree *ds,
int level,
int rec)
{
char *ents;
struct xfs_da_state_blk *blk;
void *baddr;
char *ents;
struct xfs_da_state_blk *blk;
void *baddr;
/* Dispatch the entry finding function. */
blk = &ds->state->path.blk[level];
@@ -123,8 +123,8 @@ xfs_scrub_da_btree_entry(
/* Scrub a da btree hash (key). */
int
xfs_scrub_da_btree_hash(
struct xfs_scrub_da_btree *ds,
xchk_da_btree_hash(
struct xchk_da_btree *ds,
int level,
__be32 *hashp)
{
@@ -136,7 +136,7 @@ xfs_scrub_da_btree_hash(
/* Is this hash in order? */
hash = be32_to_cpu(*hashp);
if (hash < ds->hashes[level])
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
ds->hashes[level] = hash;
if (level == 0)
@@ -144,10 +144,10 @@
/* Is this hash no larger than the parent hash? */
blks = ds->state->path.blk;
entry = xfs_scrub_da_btree_entry(ds, level - 1, blks[level - 1].index);
entry = xchk_da_btree_entry(ds, level - 1, blks[level - 1].index);
parent_hash = be32_to_cpu(entry->hashval);
if (parent_hash < hash)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
return 0;
}
@@ -157,13 +157,13 @@ xfs_scrub_da_btree_hash(
* pointer.
*/
STATIC bool
xfs_scrub_da_btree_ptr_ok(
struct xfs_scrub_da_btree *ds,
int level,
xfs_dablk_t blkno)
xchk_da_btree_ptr_ok(
struct xchk_da_btree *ds,
int level,
xfs_dablk_t blkno)
{
if (blkno < ds->lowest || (ds->highest != 0 && blkno >= ds->highest)) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
return false;
}
@@ -176,7 +176,7 @@ xfs_scrub_da_btree_ptr_ok(
* leaf1, we must multiplex the verifiers.
*/
static void
xfs_scrub_da_btree_read_verify(
xchk_da_btree_read_verify(
struct xfs_buf *bp)
{
struct xfs_da_blkinfo *info = bp->b_addr;
@@ -198,7 +198,7 @@ xfs_scrub_da_btree_read_verify(
}
}
static void
xfs_scrub_da_btree_write_verify(
xchk_da_btree_write_verify(
struct xfs_buf *bp)
{
struct xfs_da_blkinfo *info = bp->b_addr;
@@ -220,7 +220,7 @@ xfs_scrub_da_btree_write_verify(
}
}
static void *
xfs_scrub_da_btree_verify(
xchk_da_btree_verify(
struct xfs_buf *bp)
{
struct xfs_da_blkinfo *info = bp->b_addr;
@@ -236,23 +236,23 @@
}
}
static const struct xfs_buf_ops xfs_scrub_da_btree_buf_ops = {
.name = "xfs_scrub_da_btree",
.verify_read = xfs_scrub_da_btree_read_verify,
.verify_write = xfs_scrub_da_btree_write_verify,
.verify_struct = xfs_scrub_da_btree_verify,
static const struct xfs_buf_ops xchk_da_btree_buf_ops = {
.name = "xchk_da_btree",
.verify_read = xchk_da_btree_read_verify,
.verify_write = xchk_da_btree_write_verify,
.verify_struct = xchk_da_btree_verify,
};
/* Check a block's sibling. */
STATIC int
xfs_scrub_da_btree_block_check_sibling(
struct xfs_scrub_da_btree *ds,
int level,
int direction,
xfs_dablk_t sibling)
xchk_da_btree_block_check_sibling(
struct xchk_da_btree *ds,
int level,
int direction,
xfs_dablk_t sibling)
{
int retval;
int error;
int retval;
int error;
memcpy(&ds->state->altpath, &ds->state->path,
sizeof(ds->state->altpath));
@@ -265,7 +265,7 @@ xfs_scrub_da_btree_block_check_sibling(
error = xfs_da3_path_shift(ds->state, &ds->state->altpath,
direction, false, &retval);
if (error == 0 && retval == 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
error = 0;
goto out;
}
@@ -273,19 +273,19 @@
/* Move the alternate cursor one block in the direction given. */
error = xfs_da3_path_shift(ds->state, &ds->state->altpath,
direction, false, &retval);
if (!xfs_scrub_da_process_error(ds, level, &error))
if (!xchk_da_process_error(ds, level, &error))
return error;
if (retval) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
return error;
}
if (ds->state->altpath.blk[level].bp)
xfs_scrub_buffer_recheck(ds->sc,
xchk_buffer_recheck(ds->sc,
ds->state->altpath.blk[level].bp);
/* Compare upper level pointer to sibling pointer. */
if (ds->state->altpath.blk[level].blkno != sibling)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
xfs_trans_brelse(ds->dargs.trans, ds->state->altpath.blk[level].bp);
out:
return error;
@@ -293,14 +293,14 @@ out:
/* Check a block's sibling pointers. */
STATIC int
xfs_scrub_da_btree_block_check_siblings(
struct xfs_scrub_da_btree *ds,
int level,
struct xfs_da_blkinfo *hdr)
xchk_da_btree_block_check_siblings(
struct xchk_da_btree *ds,
int level,
struct xfs_da_blkinfo *hdr)
{
xfs_dablk_t forw;
xfs_dablk_t back;
int error = 0;
xfs_dablk_t forw;
xfs_dablk_t back;
int error = 0;
forw = be32_to_cpu(hdr->forw);
back = be32_to_cpu(hdr->back);
@@ -308,7 +308,7 @@ xfs_scrub_da_btree_block_check_siblings(
/* Top level blocks should not have sibling pointers. */
if (level == 0) {
if (forw != 0 || back != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
return 0;
}
@@ -316,10 +316,10 @@
* Check back (left) and forw (right) pointers. These functions
* absorb error codes for us.
*/
error = xfs_scrub_da_btree_block_check_sibling(ds, level, 0, back);
error = xchk_da_btree_block_check_sibling(ds, level, 0, back);
if (error)
goto out;
error = xfs_scrub_da_btree_block_check_sibling(ds, level, 1, forw);
error = xchk_da_btree_block_check_sibling(ds, level, 1, forw);
out:
memset(&ds->state->altpath, 0, sizeof(ds->state->altpath));
@@ -328,8 +328,8 @@ out:
/* Load a dir/attribute block from a btree. */
STATIC int
xfs_scrub_da_btree_block(
struct xfs_scrub_da_btree *ds,
xchk_da_btree_block(
struct xchk_da_btree *ds,
int level,
xfs_dablk_t blkno)
{
@@ -355,17 +355,17 @@ xfs_scrub_da_btree_block(
/* Check the pointer. */
blk->blkno = blkno;
if (!xfs_scrub_da_btree_ptr_ok(ds, level, blkno))
if (!xchk_da_btree_ptr_ok(ds, level, blkno))
goto out_nobuf;
/* Read the buffer. */
error = xfs_da_read_buf(dargs->trans, dargs->dp, blk->blkno, -2,
&blk->bp, dargs->whichfork,
&xfs_scrub_da_btree_buf_ops);
if (!xfs_scrub_da_process_error(ds, level, &error))
&xchk_da_btree_buf_ops);
if (!xchk_da_process_error(ds, level, &error))
goto out_nobuf;
if (blk->bp)
xfs_scrub_buffer_recheck(ds->sc, blk->bp);
xchk_buffer_recheck(ds->sc, blk->bp);
/*
* We didn't find a dir btree root block, which means that
@@ -378,7 +378,7 @@ xfs_scrub_da_btree_block(
/* It's /not/ ok for attr trees not to have a da btree. */
if (blk->bp == NULL) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out_nobuf;
}
@@ -388,17 +388,17 @@ xfs_scrub_da_btree_block(
/* We only started zeroing the header on v5 filesystems. */
if (xfs_sb_version_hascrc(&ds->sc->mp->m_sb) && hdr3->hdr.pad)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
/* Check the owner. */
if (xfs_sb_version_hascrc(&ip->i_mount->m_sb)) {
owner = be64_to_cpu(hdr3->owner);
if (owner != ip->i_ino)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
}
/* Check the siblings. */
error = xfs_scrub_da_btree_block_check_siblings(ds, level, &hdr3->hdr);
error = xchk_da_btree_block_check_siblings(ds, level, &hdr3->hdr);
if (error)
goto out;
@@ -411,7 +411,7 @@ xfs_scrub_da_btree_block(
blk->magic = XFS_ATTR_LEAF_MAGIC;
blk->hashval = xfs_attr_leaf_lasthash(blk->bp, pmaxrecs);
if (ds->tree_level != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
break;
case XFS_DIR2_LEAFN_MAGIC:
case XFS_DIR3_LEAFN_MAGIC:
@@ -420,7 +420,7 @@ xfs_scrub_da_btree_block(
blk->magic = XFS_DIR2_LEAFN_MAGIC;
blk->hashval = xfs_dir2_leaf_lasthash(ip, blk->bp, pmaxrecs);
if (ds->tree_level != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
break;
case XFS_DIR2_LEAF1_MAGIC:
case XFS_DIR3_LEAF1_MAGIC:
@@ -429,7 +429,7 @@ xfs_scrub_da_btree_block(
blk->magic = XFS_DIR2_LEAF1_MAGIC;
blk->hashval = xfs_dir2_leaf_lasthash(ip, blk->bp, pmaxrecs);
if (ds->tree_level != 0)
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
break;
case XFS_DA_NODE_MAGIC:
case XFS_DA3_NODE_MAGIC:
@@ -443,13 +443,13 @@ xfs_scrub_da_btree_block(
blk->hashval = be32_to_cpu(btree[*pmaxrecs - 1].hashval);
if (level == 0) {
if (nodehdr.level >= XFS_DA_NODE_MAXDEPTH) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out_freebp;
}
ds->tree_level = nodehdr.level;
} else {
if (ds->tree_level != nodehdr.level) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out_freebp;
}
}
@@ -457,7 +457,7 @@ xfs_scrub_da_btree_block(
/* XXX: Check hdr3.pad32 once we know how to fix it. */
break;
default:
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out_freebp;
}
@@ -473,13 +473,13 @@ out_nobuf:
/* Visit all nodes and leaves of a da btree. */
int
xfs_scrub_da_btree(
struct xfs_scrub_context *sc,
xchk_da_btree(
struct xfs_scrub *sc,
int whichfork,
xfs_scrub_da_btree_rec_fn scrub_fn,
xchk_da_btree_rec_fn scrub_fn,
void *private)
{
struct xfs_scrub_da_btree ds = {};
struct xchk_da_btree ds = {};
struct xfs_mount *mp = sc->mp;
struct xfs_da_state_blk *blks;
struct xfs_da_node_entry *key;
@@ -517,7 +517,7 @@ xfs_scrub_da_btree(
/* Find the root of the da tree, if present. */
blks = ds.state->path.blk;
error = xfs_scrub_da_btree_block(&ds, level, blkno);
error = xchk_da_btree_block(&ds, level, blkno);
if (error)
goto out_state;
/*
@@ -542,12 +542,12 @@
}
/* Dispatch record scrubbing. */
rec = xfs_scrub_da_btree_entry(&ds, level,
rec = xchk_da_btree_entry(&ds, level,
blks[level].index);
error = scrub_fn(&ds, level, rec);
if (error)
break;
if (xfs_scrub_should_terminate(sc, &error) ||
if (xchk_should_terminate(sc, &error) ||
(sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
break;
@@ -566,8 +566,8 @@
}
/* Hashes in order for scrub? */
key = xfs_scrub_da_btree_entry(&ds, level, blks[level].index);
error = xfs_scrub_da_btree_hash(&ds, level, &key->hashval);
key = xchk_da_btree_entry(&ds, level, blks[level].index);
error = xchk_da_btree_hash(&ds, level, &key->hashval);
if (error)
goto out;
@@ -575,7 +575,7 @@
blkno = be32_to_cpu(key->before);
level++;
ds.tree_level--;
error = xfs_scrub_da_btree_block(&ds, level, blkno);
error = xchk_da_btree_block(&ds, level, blkno);
if (error)
goto out;
if (blks[level].bp == NULL)

@@ -8,13 +8,13 @@
/* dir/attr btree */
struct xfs_scrub_da_btree {
struct xfs_da_args dargs;
xfs_dahash_t hashes[XFS_DA_NODE_MAXDEPTH];
int maxrecs[XFS_DA_NODE_MAXDEPTH];
struct xfs_da_state *state;
struct xfs_scrub_context *sc;
void *private;
struct xchk_da_btree {
struct xfs_da_args dargs;
xfs_dahash_t hashes[XFS_DA_NODE_MAXDEPTH];
int maxrecs[XFS_DA_NODE_MAXDEPTH];
struct xfs_da_state *state;
struct xfs_scrub *sc;
void *private;
/*
* Lowest and highest directory block address in which we expect
@@ -22,24 +22,23 @@ struct xfs_scrub_da_btree {
* (presumably) means between LEAF_OFFSET and FREE_OFFSET; for
* attributes there is no limit.
*/
xfs_dablk_t lowest;
xfs_dablk_t highest;
xfs_dablk_t lowest;
xfs_dablk_t highest;
int tree_level;
int tree_level;
};
typedef int (*xfs_scrub_da_btree_rec_fn)(struct xfs_scrub_da_btree *ds,
typedef int (*xchk_da_btree_rec_fn)(struct xchk_da_btree *ds,
int level, void *rec);
/* Check for da btree operation errors. */
bool xfs_scrub_da_process_error(struct xfs_scrub_da_btree *ds, int level, int *error);
bool xchk_da_process_error(struct xchk_da_btree *ds, int level, int *error);
/* Check for da btree corruption. */
void xfs_scrub_da_set_corrupt(struct xfs_scrub_da_btree *ds, int level);
void xchk_da_set_corrupt(struct xchk_da_btree *ds, int level);
int xfs_scrub_da_btree_hash(struct xfs_scrub_da_btree *ds, int level,
__be32 *hashp);
int xfs_scrub_da_btree(struct xfs_scrub_context *sc, int whichfork,
xfs_scrub_da_btree_rec_fn scrub_fn, void *private);
int xchk_da_btree_hash(struct xchk_da_btree *ds, int level, __be32 *hashp);
int xchk_da_btree(struct xfs_scrub *sc, int whichfork,
xchk_da_btree_rec_fn scrub_fn, void *private);
#endif /* __XFS_SCRUB_DABTREE_H__ */

@@ -31,40 +31,40 @@
/* Set us up to scrub directories. */
int
xfs_scrub_setup_directory(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_directory(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
return xfs_scrub_setup_inode_contents(sc, ip, 0);
return xchk_setup_inode_contents(sc, ip, 0);
}
/* Directories */
/* Scrub a directory entry. */
struct xfs_scrub_dir_ctx {
struct xchk_dir_ctx {
/* VFS fill-directory iterator */
struct dir_context dir_iter;
struct dir_context dir_iter;
struct xfs_scrub_context *sc;
struct xfs_scrub *sc;
};
/* Check that an inode's mode matches a given DT_ type. */
STATIC int
xfs_scrub_dir_check_ftype(
struct xfs_scrub_dir_ctx *sdc,
xfs_fileoff_t offset,
xfs_ino_t inum,
int dtype)
xchk_dir_check_ftype(
struct xchk_dir_ctx *sdc,
xfs_fileoff_t offset,
xfs_ino_t inum,
int dtype)
{
struct xfs_mount *mp = sdc->sc->mp;
struct xfs_inode *ip;
int ino_dtype;
int error = 0;
struct xfs_mount *mp = sdc->sc->mp;
struct xfs_inode *ip;
int ino_dtype;
int error = 0;
if (!xfs_sb_version_hasftype(&mp->m_sb)) {
if (dtype != DT_UNKNOWN && dtype != DT_DIR)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
goto out;
}
@@ -78,7 +78,7 @@ xfs_scrub_dir_check_ftype(
* inodes can trigger immediate inactive cleanup of the inode.
*/
error = xfs_iget(mp, sdc->sc->tp, inum, 0, 0, &ip);
if (!xfs_scrub_fblock_xref_process_error(sdc->sc, XFS_DATA_FORK, offset,
if (!xchk_fblock_xref_process_error(sdc->sc, XFS_DATA_FORK, offset,
&error))
goto out;
@@ -86,8 +86,8 @@
ino_dtype = xfs_dir3_get_dtype(mp,
xfs_mode_to_ftype(VFS_I(ip)->i_mode));
if (ino_dtype != dtype)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
iput(VFS_I(ip));
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
xfs_irele(ip);
out:
return error;
}
@@ -101,23 +101,23 @@ out:
* we can look up this filename. Finally, we check the ftype.
*/
STATIC int
xfs_scrub_dir_actor(
struct dir_context *dir_iter,
const char *name,
int namelen,
loff_t pos,
u64 ino,
unsigned type)
xchk_dir_actor(
struct dir_context *dir_iter,
const char *name,
int namelen,
loff_t pos,
u64 ino,
unsigned type)
{
struct xfs_mount *mp;
struct xfs_inode *ip;
struct xfs_scrub_dir_ctx *sdc;
struct xfs_name xname;
xfs_ino_t lookup_ino;
xfs_dablk_t offset;
int error = 0;
struct xfs_mount *mp;
struct xfs_inode *ip;
struct xchk_dir_ctx *sdc;
struct xfs_name xname;
xfs_ino_t lookup_ino;
xfs_dablk_t offset;
int error = 0;
sdc = container_of(dir_iter, struct xfs_scrub_dir_ctx, dir_iter);
sdc = container_of(dir_iter, struct xchk_dir_ctx, dir_iter);
ip = sdc->sc->ip;
mp = ip->i_mount;
offset = xfs_dir2_db_to_da(mp->m_dir_geo,
@@ -125,17 +125,17 @@ xfs_scrub_dir_actor(
/* Does this inode number make sense? */
if (!xfs_verify_dir_ino(mp, ino)) {
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
goto out;
}
if (!strncmp(".", name, namelen)) {
/* If this is "." then check that the inum matches the dir. */
if (xfs_sb_version_hasftype(&mp->m_sb) && type != DT_DIR)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
if (ino != ip->i_ino)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
} else if (!strncmp("..", name, namelen)) {
/*
@@ -143,10 +143,10 @@ xfs_scrub_dir_actor(
* matches this dir.
*/
if (xfs_sb_version_hasftype(&mp->m_sb) && type != DT_DIR)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
if (ip->i_ino == mp->m_sb.sb_rootino && ino != ip->i_ino)
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK,
offset);
}
@@ -156,23 +156,23 @@
xname.type = XFS_DIR3_FT_UNKNOWN;
error = xfs_dir_lookup(sdc->sc->tp, ip, &xname, &lookup_ino, NULL);
if (!xfs_scrub_fblock_process_error(sdc->sc, XFS_DATA_FORK, offset,
if (!xchk_fblock_process_error(sdc->sc, XFS_DATA_FORK, offset,
&error))
goto out;
if (lookup_ino != ino) {
xfs_scrub_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sdc->sc, XFS_DATA_FORK, offset);
goto out;
}
/* Verify the file type. This function absorbs error codes. */
error = xfs_scrub_dir_check_ftype(sdc, offset, lookup_ino, type);
error = xchk_dir_check_ftype(sdc, offset, lookup_ino, type);
if (error)
goto out;
out:
/*
* A negative error code returned here is supposed to cause the
* dir_emit caller (xfs_readdir) to abort the directory iteration
* and return zero to xfs_scrub_directory.
* and return zero to xchk_directory.
*/
if (error == 0 && sdc->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return -EFSCORRUPTED;
@@ -181,8 +181,8 @@ out:
/* Scrub a directory btree record. */
STATIC int
xfs_scrub_dir_rec(
struct xfs_scrub_da_btree *ds,
xchk_dir_rec(
struct xchk_da_btree *ds,
int level,
void *rec)
{
@@ -203,7 +203,7 @@ xfs_scrub_dir_rec(
int error;
/* Check the hash of the entry. */
error = xfs_scrub_da_btree_hash(ds, level, &ent->hashval);
error = xchk_da_btree_hash(ds, level, &ent->hashval);
if (error)
goto out;
@@ -218,18 +218,18 @@
rec_bno = xfs_dir2_db_to_da(mp->m_dir_geo, db);
if (rec_bno >= mp->m_dir_geo->leafblk) {
xfs_scrub_da_set_corrupt(ds, level);
xchk_da_set_corrupt(ds, level);
goto out;
}
error = xfs_dir3_data_read(ds->dargs.trans, dp, rec_bno, -2, &bp);
if (!xfs_scrub_fblock_process_error(ds->sc, XFS_DATA_FORK, rec_bno,
if (!xchk_fblock_process_error(ds->sc, XFS_DATA_FORK, rec_bno,
&error))
goto out;
if (!bp) {
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out;
}
xfs_scrub_buffer_recheck(ds->sc, bp);
xchk_buffer_recheck(ds->sc, bp);
if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out_relse;
@@ -240,7 +240,7 @@ xfs_scrub_dir_rec(
p = (char *)mp->m_dir_inode_ops->data_entry_p(bp->b_addr);
endp = xfs_dir3_data_endp(mp->m_dir_geo, bp->b_addr);
if (!endp) {
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse;
}
while (p < endp) {
@@ -258,7 +258,7 @@
p += mp->m_dir_inode_ops->data_entsize(dep->namelen);
}
if (p >= endp) {
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse;
}
@@ -267,14 +267,14 @@
hash = be32_to_cpu(ent->hashval);
tag = be16_to_cpup(dp->d_ops->data_entry_tag_p(dent));
if (!xfs_verify_dir_ino(mp, ino) || tag != off)
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
if (dent->namelen == 0) {
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
goto out_relse;
}
calc_hash = xfs_da_hashname(dent->name, dent->namelen);
if (calc_hash != hash)
xfs_scrub_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
xchk_fblock_set_corrupt(ds->sc, XFS_DATA_FORK, rec_bno);
out_relse:
xfs_trans_brelse(ds->dargs.trans, bp);
@@ -288,8 +288,8 @@ out:
* shortest, and that there aren't any bogus entries.
*/
STATIC void
xfs_scrub_directory_check_free_entry(
struct xfs_scrub_context *sc,
xchk_directory_check_free_entry(
struct xfs_scrub *sc,
xfs_dablk_t lblk,
struct xfs_dir2_data_free *bf,
struct xfs_dir2_data_unused *dup)
@@ -308,13 +308,13 @@ xfs_scrub_directory_check_free_entry(
return;
/* Unused entry should be in the bestfrees but wasn't found. */
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
}
/* Check free space info in a directory data block. */
STATIC int
xfs_scrub_directory_data_bestfree(
struct xfs_scrub_context *sc,
xchk_directory_data_bestfree(
struct xfs_scrub *sc,
xfs_dablk_t lblk,
bool is_block)
{
@@ -339,15 +339,15 @@
if (is_block) {
/* dir block format */
if (lblk != XFS_B_TO_FSBT(mp, XFS_DIR2_DATA_OFFSET))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
error = xfs_dir3_block_read(sc->tp, sc->ip, &bp);
} else {
/* dir data format */
error = xfs_dir3_data_read(sc->tp, sc->ip, lblk, -1, &bp);
}
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out;
xfs_scrub_buffer_recheck(sc, bp);
xchk_buffer_recheck(sc, bp);
/* XXX: Check xfs_dir3_data_hdr.pad is zero once we start setting it. */
@@ -362,7 +362,7 @@
if (offset == 0)
continue;
if (offset >= mp->m_dir_geo->blksize) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf;
}
dup = (struct xfs_dir2_data_unused *)(bp->b_addr + offset);
@@ -372,13 +372,13 @@
if (dup->freetag != cpu_to_be16(XFS_DIR2_DATA_FREE_TAG) ||
be16_to_cpu(dup->length) != be16_to_cpu(dfp->length) ||
tag != ((char *)dup - (char *)bp->b_addr)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf;
}
/* bestfree records should be ordered largest to smallest */
if (smallest_bestfree < be16_to_cpu(dfp->length)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf;
}
@@ -400,7 +400,7 @@
dep = (struct xfs_dir2_data_entry *)ptr;
newlen = d_ops->data_entsize(dep->namelen);
if (newlen <= 0) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
lblk);
goto out_buf;
}
@@ -411,7 +411,7 @@
/* Spot check this free entry */
tag = be16_to_cpu(*xfs_dir2_data_unused_tag_p(dup));
if (tag != ((char *)dup - (char *)bp->b_addr)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf;
}
@@ -419,14 +419,14 @@
* Either this entry is a bestfree or it's smaller than
* any of the bestfrees.
*/
xfs_scrub_directory_check_free_entry(sc, lblk, bf, dup);
xchk_directory_check_free_entry(sc, lblk, bf, dup);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out_buf;
/* Move on. */
newlen = be16_to_cpu(dup->length);
if (newlen <= 0) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out_buf;
}
ptr += newlen;
@@ -436,11 +436,11 @@
/* We're required to fill all the space. */
if (ptr != endptr)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
/* Did we see at least as many free slots as there are bestfrees? */
if (nr_frees < nr_bestfrees)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
out_buf:
xfs_trans_brelse(sc->tp, bp);
out:
@@ -454,8 +454,8 @@ out:
* array is in order.
*/
STATIC void
xfs_scrub_directory_check_freesp(
struct xfs_scrub_context *sc,
xchk_directory_check_freesp(
struct xfs_scrub *sc,
xfs_dablk_t lblk,
struct xfs_buf *dbp,
unsigned int len)
@@ -465,16 +465,16 @@ xfs_scrub_directory_check_freesp(
dfp = sc->ip->d_ops->data_bestfree_p(dbp->b_addr);
if (len != be16_to_cpu(dfp->length))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
if (len > 0 && be16_to_cpu(dfp->offset) == 0)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
}
/* Check free space info in a directory leaf1 block. */
STATIC int
xfs_scrub_directory_leaf1_bestfree(
struct xfs_scrub_context *sc,
xchk_directory_leaf1_bestfree(
struct xfs_scrub *sc,
struct xfs_da_args *args,
xfs_dablk_t lblk)
{
@@ -497,9 +497,9 @@ xfs_scrub_directory_leaf1_bestfree(
/* Read the free space block. */
error = xfs_dir3_leaf_read(sc->tp, sc->ip, lblk, -1, &bp);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out;
xfs_scrub_buffer_recheck(sc, bp);
xchk_buffer_recheck(sc, bp);
leaf = bp->b_addr;
d_ops->leaf_hdr_from_disk(&leafhdr, leaf);
@@ -512,7 +512,7 @@
struct xfs_dir3_leaf_hdr *hdr3 = bp->b_addr;
if (hdr3->pad != cpu_to_be32(0))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
}
/*
@@ -520,19 +520,19 @@ xfs_scrub_directory_leaf1_bestfree(
* blocks that can fit under i_size.
*/
if (bestcount != xfs_dir2_byte_to_db(geo, sc->ip->i_d.di_size)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
/* Is the leaf count even remotely sane? */
if (leafhdr.count > d_ops->leaf_max_ents(geo)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
/* Leaves and bests don't overlap in leaf format. */
if ((char *)&ents[leafhdr.count] > (char *)bestp) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
@@ -540,13 +540,13 @@
for (i = 0; i < leafhdr.count; i++) {
hash = be32_to_cpu(ents[i].hashval);
if (i > 0 && lasthash > hash)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
lasthash = hash;
if (ents[i].address == cpu_to_be32(XFS_DIR2_NULL_DATAPTR))
stale++;
}
if (leafhdr.stale != stale)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
@@ -557,10 +557,10 @@
continue;
error = xfs_dir3_data_read(sc->tp, sc->ip,
i * args->geo->fsbcount, -1, &dbp);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk,
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk,
&error))
break;
xfs_scrub_directory_check_freesp(sc, lblk, dbp, best);
xchk_directory_check_freesp(sc, lblk, dbp, best);
xfs_trans_brelse(sc->tp, dbp);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
@@ -571,8 +571,8 @@ out:
/* Check free space info in a directory freespace block. */
STATIC int
xfs_scrub_directory_free_bestfree(
struct xfs_scrub_context *sc,
xchk_directory_free_bestfree(
struct xfs_scrub *sc,
struct xfs_da_args *args,
xfs_dablk_t lblk)
{
@@ -587,15 +587,15 @@
/* Read the free space block */
error = xfs_dir2_free_read(sc->tp, sc->ip, lblk, &bp);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out;
xfs_scrub_buffer_recheck(sc, bp);
xchk_buffer_recheck(sc, bp);
if (xfs_sb_version_hascrc(&sc->mp->m_sb)) {
struct xfs_dir3_free_hdr *hdr3 = bp->b_addr;
if (hdr3->pad != cpu_to_be32(0))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
}
/* Check all the entries. */
@@ -610,36 +610,36 @@ xfs_scrub_directory_free_bestfree(
error = xfs_dir3_data_read(sc->tp, sc->ip,
(freehdr.firstdb + i) * args->geo->fsbcount,
-1, &dbp);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk,
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk,
&error))
break;
xfs_scrub_directory_check_freesp(sc, lblk, dbp, best);
xchk_directory_check_freesp(sc, lblk, dbp, best);
xfs_trans_brelse(sc->tp, dbp);
}
if (freehdr.nused + stale != freehdr.nvalid)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
out:
return error;
}
/* Check free space information in directories. */
STATIC int
xfs_scrub_directory_blocks(
struct xfs_scrub_context *sc)
xchk_directory_blocks(
struct xfs_scrub *sc)
{
struct xfs_bmbt_irec got;
struct xfs_da_args args;
struct xfs_ifork *ifp;
struct xfs_mount *mp = sc->mp;
xfs_fileoff_t leaf_lblk;
xfs_fileoff_t free_lblk;
xfs_fileoff_t lblk;
struct xfs_iext_cursor icur;
xfs_dablk_t dabno;
bool found;
int is_block = 0;
int error;
struct xfs_bmbt_irec got;
struct xfs_da_args args;
struct xfs_ifork *ifp;
struct xfs_mount *mp = sc->mp;
xfs_fileoff_t leaf_lblk;
xfs_fileoff_t free_lblk;
xfs_fileoff_t lblk;
struct xfs_iext_cursor icur;
xfs_dablk_t dabno;
bool found;
int is_block = 0;
int error;
/* Ignore local format directories. */
if (sc->ip->i_d.di_format != XFS_DINODE_FMT_EXTENTS &&
@@ -656,7 +656,7 @@ xfs_scrub_directory_blocks(
args.geo = mp->m_dir_geo;
args.trans = sc->tp;
error = xfs_dir2_isblock(&args, &is_block);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, &error))
goto out;
/* Iterate all the data extents in the directory... */
@@ -666,7 +666,7 @@
if (is_block &&
(got.br_startoff > 0 ||
got.br_blockcount != args.geo->fsbcount)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
got.br_startoff);
break;
}
@@ -690,7 +690,7 @@
args.geo->fsbcount);
lblk < got.br_startoff + got.br_blockcount;
lblk += args.geo->fsbcount) {
error = xfs_scrub_directory_data_bestfree(sc, lblk,
error = xchk_directory_data_bestfree(sc, lblk,
is_block);
if (error)
goto out;
@@ -709,10 +709,10 @@
got.br_blockcount == args.geo->fsbcount &&
!xfs_iext_next_extent(ifp, &icur, &got)) {
if (is_block) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
error = xfs_scrub_directory_leaf1_bestfree(sc, &args,
error = xchk_directory_leaf1_bestfree(sc, &args,
leaf_lblk);
if (error)
goto out;
@@ -731,11 +731,11 @@
*/
lblk = got.br_startoff;
if (lblk & ~0xFFFFFFFFULL) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
if (is_block) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
goto out;
}
@@ -754,7 +754,7 @@
args.geo->fsbcount);
lblk < got.br_startoff + got.br_blockcount;
lblk += args.geo->fsbcount) {
error = xfs_scrub_directory_free_bestfree(sc, &args,
error = xchk_directory_free_bestfree(sc, &args,
lblk);
if (error)
goto out;
@@ -769,29 +769,29 @@ out:
/* Scrub a whole directory. */
int
xfs_scrub_directory(
struct xfs_scrub_context *sc)
xchk_directory(
struct xfs_scrub *sc)
{
struct xfs_scrub_dir_ctx sdc = {
.dir_iter.actor = xfs_scrub_dir_actor,
struct xchk_dir_ctx sdc = {
.dir_iter.actor = xchk_dir_actor,
.dir_iter.pos = 0,
.sc = sc,
};
size_t bufsize;
loff_t oldpos;
int error = 0;
size_t bufsize;
loff_t oldpos;
int error = 0;
if (!S_ISDIR(VFS_I(sc->ip)->i_mode))
return -ENOENT;
/* Plausible size? */
if (sc->ip->i_d.di_size < xfs_dir2_sf_hdr_size(0)) {
xfs_scrub_ino_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_set_corrupt(sc, sc->ip->i_ino);
goto out;
}
/* Check directory tree structure */
error = xfs_scrub_da_btree(sc, XFS_DATA_FORK, xfs_scrub_dir_rec, NULL);
error = xchk_da_btree(sc, XFS_DATA_FORK, xchk_dir_rec, NULL);
if (error)
return error;
@@ -799,7 +799,7 @@ xfs_scrub_directory(
return error;
/* Check the freespace. */
error = xfs_scrub_directory_blocks(sc);
error = xchk_directory_blocks(sc);
if (error)
return error;
@@ -816,7 +816,7 @@
/*
* Look up every name in this directory by hash.
*
* Use the xfs_readdir function to call xfs_scrub_dir_actor on
* Use the xfs_readdir function to call xchk_dir_actor on
* every directory entry in this directory. In _actor, we check
* the name, inode number, and ftype (if applicable) of the
* entry. xfs_readdir uses the VFS filldir functions to provide
@@ -834,7 +834,7 @@
xfs_iunlock(sc->ip, XFS_ILOCK_EXCL);
while (true) {
error = xfs_readdir(sc->tp, sc->ip, &sdc.dir_iter, bufsize);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0,
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0,
&error))
goto out;
if (oldpos == sdc.dir_iter.pos)

@@ -35,11 +35,11 @@
* try again after forcing logged inode cores out to disk.
*/
int
xfs_scrub_setup_ag_iallocbt(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_ag_iallocbt(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
return xfs_scrub_setup_ag_btree(sc, ip, sc->try_harder);
return xchk_setup_ag_btree(sc, ip, sc->try_harder);
}
/* Inode btree scrubber. */
@@ -50,8 +50,8 @@ xfs_scrub_setup_ag_iallocbt(
* we have a record or not depending on freecount.
*/
static inline void
xfs_scrub_iallocbt_chunk_xref_other(
struct xfs_scrub_context *sc,
xchk_iallocbt_chunk_xref_other(
struct xfs_scrub *sc,
struct xfs_inobt_rec_incore *irec,
xfs_agino_t agino)
{
@@ -66,17 +66,17 @@ xfs_scrub_iallocbt_chunk_xref_other(
if (!(*pcur))
return;
error = xfs_ialloc_has_inode_record(*pcur, agino, agino, &has_irec);
if (!xfs_scrub_should_check_xref(sc, &error, pcur))
if (!xchk_should_check_xref(sc, &error, pcur))
return;
if (((irec->ir_freecount > 0 && !has_irec) ||
(irec->ir_freecount == 0 && has_irec)))
xfs_scrub_btree_xref_set_corrupt(sc, *pcur, 0);
xchk_btree_xref_set_corrupt(sc, *pcur, 0);
}
/* Cross-reference with the other btrees. */
STATIC void
xfs_scrub_iallocbt_chunk_xref(
struct xfs_scrub_context *sc,
xchk_iallocbt_chunk_xref(
struct xfs_scrub *sc,
struct xfs_inobt_rec_incore *irec,
xfs_agino_t agino,
xfs_agblock_t agbno,
@@ -87,17 +87,17 @@ xfs_scrub_iallocbt_chunk_xref(
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
xfs_scrub_xref_is_used_space(sc, agbno, len);
xfs_scrub_iallocbt_chunk_xref_other(sc, irec, agino);
xchk_xref_is_used_space(sc, agbno, len);
xchk_iallocbt_chunk_xref_other(sc, irec, agino);
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES);
xfs_scrub_xref_is_owned_by(sc, agbno, len, &oinfo);
xfs_scrub_xref_is_not_shared(sc, agbno, len);
xchk_xref_is_owned_by(sc, agbno, len, &oinfo);
xchk_xref_is_not_shared(sc, agbno, len);
}
/* Is this chunk worth checking? */
STATIC bool
xfs_scrub_iallocbt_chunk(
struct xfs_scrub_btree *bs,
xchk_iallocbt_chunk(
struct xchk_btree *bs,
struct xfs_inobt_rec_incore *irec,
xfs_agino_t agino,
xfs_extlen_t len)
@@ -110,16 +110,16 @@ xfs_scrub_iallocbt_chunk(
if (bno + len <= bno ||
!xfs_verify_agbno(mp, agno, bno) ||
!xfs_verify_agbno(mp, agno, bno + len - 1))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
xfs_scrub_iallocbt_chunk_xref(bs->sc, irec, agino, bno, len);
xchk_iallocbt_chunk_xref(bs->sc, irec, agino, bno, len);
return true;
}
/* Count the number of free inodes. */
static unsigned int
xfs_scrub_iallocbt_freecount(
xchk_iallocbt_freecount(
xfs_inofree_t freemask)
{
BUILD_BUG_ON(sizeof(freemask) != sizeof(__u64));
@@ -128,8 +128,8 @@
/* Check a particular inode with ir_free. */
STATIC int
xfs_scrub_iallocbt_check_cluster_freemask(
struct xfs_scrub_btree *bs,
xchk_iallocbt_check_cluster_freemask(
struct xchk_btree *bs,
xfs_ino_t fsino,
xfs_agino_t chunkino,
xfs_agino_t clusterino,
@@ -143,14 +143,14 @@ xfs_scrub_iallocbt_check_cluster_freemask(
bool inuse;
int error = 0;
if (xfs_scrub_should_terminate(bs->sc, &error))
if (xchk_should_terminate(bs->sc, &error))
return error;
dip = xfs_buf_offset(bp, clusterino * mp->m_sb.sb_inodesize);
if (be16_to_cpu(dip->di_magic) != XFS_DINODE_MAGIC ||
(dip->di_version >= 3 &&
be64_to_cpu(dip->di_ino) != fsino + clusterino)) {
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
goto out;
}
@@ -175,15 +175,15 @@ xfs_scrub_iallocbt_check_cluster_freemask(
freemask_ok = inode_is_free ^ inuse;
}
if (!freemask_ok)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
out:
return 0;
}
/* Make sure the free mask is consistent with what the inodes think. */
STATIC int
xfs_scrub_iallocbt_check_freemask(
struct xfs_scrub_btree *bs,
xchk_iallocbt_check_freemask(
struct xchk_btree *bs,
struct xfs_inobt_rec_incore *irec)
{
struct xfs_owner_info oinfo;
@@ -223,18 +223,18 @@ xfs_scrub_iallocbt_check_freemask(
/* The whole cluster must be a hole or not a hole. */
ir_holemask = (irec->ir_holemask & holemask);
if (ir_holemask != holemask && ir_holemask != 0) {
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
continue;
}
/* If any part of this is a hole, skip it. */
if (ir_holemask) {
xfs_scrub_xref_is_not_owned_by(bs->sc, agbno,
xchk_xref_is_not_owned_by(bs->sc, agbno,
blks_per_cluster, &oinfo);
continue;
}
xfs_scrub_xref_is_owned_by(bs->sc, agbno, blks_per_cluster,
xchk_xref_is_owned_by(bs->sc, agbno, blks_per_cluster,
&oinfo);
/* Grab the inode cluster buffer. */
@@ -245,13 +245,13 @@ xfs_scrub_iallocbt_check_freemask(
error = xfs_imap_to_bp(mp, bs->cur->bc_tp, &imap,
&dip, &bp, 0, 0);
if (!xfs_scrub_btree_xref_process_error(bs->sc, bs->cur, 0,
if (!xchk_btree_xref_process_error(bs->sc, bs->cur, 0,
&error))
continue;
/* Which inodes are free? */
for (clusterino = 0; clusterino < nr_inodes; clusterino++) {
error = xfs_scrub_iallocbt_check_cluster_freemask(bs,
error = xchk_iallocbt_check_cluster_freemask(bs,
fsino, chunkino, clusterino, irec, bp);
if (error) {
xfs_trans_brelse(bs->cur->bc_tp, bp);
@@ -267,8 +267,8 @@ xfs_scrub_iallocbt_check_freemask(
/* Scrub an inobt/finobt record. */
STATIC int
xfs_scrub_iallocbt_rec(
struct xfs_scrub_btree *bs,
xchk_iallocbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
@@ -289,18 +289,18 @@ xfs_scrub_iallocbt_rec(
if (irec.ir_count > XFS_INODES_PER_CHUNK ||
irec.ir_freecount > XFS_INODES_PER_CHUNK)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
real_freecount = irec.ir_freecount +
(XFS_INODES_PER_CHUNK - irec.ir_count);
if (real_freecount != xfs_scrub_iallocbt_freecount(irec.ir_free))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
if (real_freecount != xchk_iallocbt_freecount(irec.ir_free))
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
agino = irec.ir_startino;
/* Record has to be properly aligned within the AG. */
if (!xfs_verify_agino(mp, agno, agino) ||
!xfs_verify_agino(mp, agno, agino + XFS_INODES_PER_CHUNK - 1)) {
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
goto out;
}
@@ -308,7 +308,7 @@ xfs_scrub_iallocbt_rec(
agbno = XFS_AGINO_TO_AGBNO(mp, irec.ir_startino);
if ((agbno & (xfs_ialloc_cluster_alignment(mp) - 1)) ||
(agbno & (xfs_icluster_size_fsb(mp) - 1)))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
*inode_blocks += XFS_B_TO_FSB(mp,
irec.ir_count * mp->m_sb.sb_inodesize);
@@ -318,9 +318,9 @@ xfs_scrub_iallocbt_rec(
len = XFS_B_TO_FSB(mp,
XFS_INODES_PER_CHUNK * mp->m_sb.sb_inodesize);
if (irec.ir_count != XFS_INODES_PER_CHUNK)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (!xfs_scrub_iallocbt_chunk(bs, &irec, agino, len))
if (!xchk_iallocbt_chunk(bs, &irec, agino, len))
goto out;
goto check_freemask;
}
@@ -333,12 +333,12 @@ xfs_scrub_iallocbt_rec(
holes = ~xfs_inobt_irec_to_allocmask(&irec);
if ((holes & irec.ir_free) != holes ||
irec.ir_freecount > irec.ir_count)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
for (i = 0; i < XFS_INOBT_HOLEMASK_BITS; i++) {
if (holemask & 1)
holecount += XFS_INODES_PER_HOLEMASK_BIT;
else if (!xfs_scrub_iallocbt_chunk(bs, &irec, agino, len))
else if (!xchk_iallocbt_chunk(bs, &irec, agino, len))
break;
holemask >>= 1;
agino += XFS_INODES_PER_HOLEMASK_BIT;
@@ -346,10 +346,10 @@ xfs_scrub_iallocbt_rec(
if (holecount > XFS_INODES_PER_CHUNK ||
holecount + irec.ir_count != XFS_INODES_PER_CHUNK)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
check_freemask:
error = xfs_scrub_iallocbt_check_freemask(bs, &irec);
error = xchk_iallocbt_check_freemask(bs, &irec);
if (error)
goto out;
@@ -362,39 +362,39 @@ out:
* Don't bother if we're missing btree cursors, as we're already corrupt.
*/
STATIC void
xfs_scrub_iallocbt_xref_rmap_btreeblks(
struct xfs_scrub_context *sc,
int which)
xchk_iallocbt_xref_rmap_btreeblks(
struct xfs_scrub *sc,
int which)
{
struct xfs_owner_info oinfo;
xfs_filblks_t blocks;
xfs_extlen_t inobt_blocks = 0;
xfs_extlen_t finobt_blocks = 0;
int error;
struct xfs_owner_info oinfo;
xfs_filblks_t blocks;
xfs_extlen_t inobt_blocks = 0;
xfs_extlen_t finobt_blocks = 0;
int error;
if (!sc->sa.ino_cur || !sc->sa.rmap_cur ||
(xfs_sb_version_hasfinobt(&sc->mp->m_sb) && !sc->sa.fino_cur) ||
xfs_scrub_skip_xref(sc->sm))
xchk_skip_xref(sc->sm))
return;
/* Check that we saw as many inobt blocks as the rmap says. */
error = xfs_btree_count_blocks(sc->sa.ino_cur, &inobt_blocks);
if (!xfs_scrub_process_error(sc, 0, 0, &error))
if (!xchk_process_error(sc, 0, 0, &error))
return;
if (sc->sa.fino_cur) {
error = xfs_btree_count_blocks(sc->sa.fino_cur, &finobt_blocks);
if (!xfs_scrub_process_error(sc, 0, 0, &error))
if (!xchk_process_error(sc, 0, 0, &error))
return;
}
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT);
error = xfs_scrub_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo,
error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo,
&blocks);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (blocks != inobt_blocks + finobt_blocks)
xfs_scrub_btree_set_corrupt(sc, sc->sa.ino_cur, 0);
xchk_btree_set_corrupt(sc, sc->sa.ino_cur, 0);
}
/*
@@ -402,47 +402,47 @@ xfs_scrub_iallocbt_xref_rmap_btreeblks(
* the rmap says are owned by inodes.
*/
STATIC void
xfs_scrub_iallocbt_xref_rmap_inodes(
struct xfs_scrub_context *sc,
int which,
xfs_filblks_t inode_blocks)
xchk_iallocbt_xref_rmap_inodes(
struct xfs_scrub *sc,
int which,
xfs_filblks_t inode_blocks)
{
struct xfs_owner_info oinfo;
xfs_filblks_t blocks;
int error;
struct xfs_owner_info oinfo;
xfs_filblks_t blocks;
int error;
if (!sc->sa.rmap_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm))
return;
/* Check that we saw as many inode blocks as the rmap knows about. */
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES);
error = xfs_scrub_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo,
error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, &oinfo,
&blocks);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (blocks != inode_blocks)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
}
/* Scrub the inode btrees for some AG. */
STATIC int
xfs_scrub_iallocbt(
struct xfs_scrub_context *sc,
xfs_btnum_t which)
xchk_iallocbt(
struct xfs_scrub *sc,
xfs_btnum_t which)
{
struct xfs_btree_cur *cur;
struct xfs_owner_info oinfo;
xfs_filblks_t inode_blocks = 0;
int error;
struct xfs_btree_cur *cur;
struct xfs_owner_info oinfo;
xfs_filblks_t inode_blocks = 0;
int error;
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT);
cur = which == XFS_BTNUM_INO ? sc->sa.ino_cur : sc->sa.fino_cur;
error = xfs_scrub_btree(sc, cur, xfs_scrub_iallocbt_rec, &oinfo,
error = xchk_btree(sc, cur, xchk_iallocbt_rec, &oinfo,
&inode_blocks);
if (error)
return error;
xfs_scrub_iallocbt_xref_rmap_btreeblks(sc, which);
xchk_iallocbt_xref_rmap_btreeblks(sc, which);
/*
* If we're scrubbing the inode btree, inode_blocks is the number of
@@ -452,64 +452,64 @@ xfs_scrub_iallocbt(
* to inode chunks with free inodes.
*/
if (which == XFS_BTNUM_INO)
xfs_scrub_iallocbt_xref_rmap_inodes(sc, which, inode_blocks);
xchk_iallocbt_xref_rmap_inodes(sc, which, inode_blocks);
return error;
}
int
xfs_scrub_inobt(
struct xfs_scrub_context *sc)
xchk_inobt(
struct xfs_scrub *sc)
{
return xfs_scrub_iallocbt(sc, XFS_BTNUM_INO);
return xchk_iallocbt(sc, XFS_BTNUM_INO);
}
int
xfs_scrub_finobt(
struct xfs_scrub_context *sc)
xchk_finobt(
struct xfs_scrub *sc)
{
return xfs_scrub_iallocbt(sc, XFS_BTNUM_FINO);
return xchk_iallocbt(sc, XFS_BTNUM_FINO);
}
/* See if an inode btree has (or doesn't have) an inode chunk record. */
static inline void
xfs_scrub_xref_inode_check(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len,
struct xfs_btree_cur **icur,
bool should_have_inodes)
xchk_xref_inode_check(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len,
struct xfs_btree_cur **icur,
bool should_have_inodes)
{
bool has_inodes;
int error;
bool has_inodes;
int error;
if (!(*icur) || xfs_scrub_skip_xref(sc->sm))
if (!(*icur) || xchk_skip_xref(sc->sm))
return;
error = xfs_ialloc_has_inodes_at_extent(*icur, agbno, len, &has_inodes);
if (!xfs_scrub_should_check_xref(sc, &error, icur))
if (!xchk_should_check_xref(sc, &error, icur))
return;
if (has_inodes != should_have_inodes)
xfs_scrub_btree_xref_set_corrupt(sc, *icur, 0);
xchk_btree_xref_set_corrupt(sc, *icur, 0);
}
/* xref check that the extent is not covered by inodes */
void
xfs_scrub_xref_is_not_inode_chunk(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
xchk_xref_is_not_inode_chunk(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
xfs_scrub_xref_inode_check(sc, agbno, len, &sc->sa.ino_cur, false);
xfs_scrub_xref_inode_check(sc, agbno, len, &sc->sa.fino_cur, false);
xchk_xref_inode_check(sc, agbno, len, &sc->sa.ino_cur, false);
xchk_xref_inode_check(sc, agbno, len, &sc->sa.fino_cur, false);
}
/* xref check that the extent is covered by inodes */
void
xfs_scrub_xref_is_inode_chunk(
struct xfs_scrub_context *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
xchk_xref_is_inode_chunk(
struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
xfs_scrub_xref_inode_check(sc, agbno, len, &sc->sa.ino_cur, true);
xchk_xref_inode_check(sc, agbno, len, &sc->sa.ino_cur, true);
}

@@ -37,23 +37,23 @@
* the goal.
*/
int
xfs_scrub_setup_inode(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_inode(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
int error;
int error;
/*
* Try to get the inode. If the verifiers fail, we try again
* in raw mode.
*/
error = xfs_scrub_get_inode(sc, ip);
error = xchk_get_inode(sc, ip);
switch (error) {
case 0:
break;
case -EFSCORRUPTED:
case -EFSBADCRC:
return xfs_scrub_trans_alloc(sc, 0);
return xchk_trans_alloc(sc, 0);
default:
return error;
}
@@ -61,7 +61,7 @@ xfs_scrub_setup_inode(
/* Got the inode, lock it and we're ready to go. */
sc->ilock_flags = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
xfs_ilock(sc->ip, sc->ilock_flags);
error = xfs_scrub_trans_alloc(sc, 0);
error = xchk_trans_alloc(sc, 0);
if (error)
goto out;
sc->ilock_flags |= XFS_ILOCK_EXCL;
@@ -76,19 +76,19 @@ out:
/* Validate di_extsize hint. */
STATIC void
xfs_scrub_inode_extsize(
struct xfs_scrub_context *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags)
xchk_inode_extsize(
struct xfs_scrub *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags)
{
xfs_failaddr_t fa;
xfs_failaddr_t fa;
fa = xfs_inode_validate_extsize(sc->mp, be32_to_cpu(dip->di_extsize),
mode, flags);
if (fa)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
/*
@@ -98,33 +98,33 @@ xfs_scrub_inode_extsize(
* These functions must be kept in sync with each other.
*/
STATIC void
xfs_scrub_inode_cowextsize(
struct xfs_scrub_context *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags,
uint64_t flags2)
xchk_inode_cowextsize(
struct xfs_scrub *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags,
uint64_t flags2)
{
xfs_failaddr_t fa;
xfs_failaddr_t fa;
fa = xfs_inode_validate_cowextsize(sc->mp,
be32_to_cpu(dip->di_cowextsize), mode, flags,
flags2);
if (fa)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
/* Make sure the di_flags make sense for the inode. */
STATIC void
xfs_scrub_inode_flags(
struct xfs_scrub_context *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags)
xchk_inode_flags(
struct xfs_scrub *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags)
{
struct xfs_mount *mp = sc->mp;
struct xfs_mount *mp = sc->mp;
if (flags & ~XFS_DIFLAG_ANY)
goto bad;
@@ -157,20 +157,20 @@ xfs_scrub_inode_flags(
return;
bad:
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
/* Make sure the di_flags2 make sense for the inode. */
STATIC void
xfs_scrub_inode_flags2(
struct xfs_scrub_context *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags,
uint64_t flags2)
xchk_inode_flags2(
struct xfs_scrub *sc,
struct xfs_dinode *dip,
xfs_ino_t ino,
uint16_t mode,
uint16_t flags,
uint64_t flags2)
{
struct xfs_mount *mp = sc->mp;
struct xfs_mount *mp = sc->mp;
if (flags2 & ~XFS_DIFLAG2_ANY)
goto bad;
@@ -200,23 +200,23 @@ xfs_scrub_inode_flags2(
return;
bad:
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
/* Scrub all the ondisk inode fields. */
STATIC void
xfs_scrub_dinode(
struct xfs_scrub_context *sc,
struct xfs_dinode *dip,
xfs_ino_t ino)
xchk_dinode(
struct xfs_scrub *sc,
struct xfs_dinode *dip,
xfs_ino_t ino)
{
struct xfs_mount *mp = sc->mp;
size_t fork_recs;
unsigned long long isize;
uint64_t flags2;
uint32_t nextents;
uint16_t flags;
uint16_t mode;
struct xfs_mount *mp = sc->mp;
size_t fork_recs;
unsigned long long isize;
uint64_t flags2;
uint32_t nextents;
uint16_t flags;
uint16_t mode;
flags = be16_to_cpu(dip->di_flags);
if (dip->di_version >= 3)
@@ -237,7 +237,7 @@ xfs_scrub_dinode(
/* mode is recognized */
break;
default:
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
}
@@ -248,22 +248,22 @@ xfs_scrub_dinode(
* We autoconvert v1 inodes into v2 inodes on writeout,
* so just mark this inode for preening.
*/
xfs_scrub_ino_set_preen(sc, ino);
xchk_ino_set_preen(sc, ino);
break;
case 2:
case 3:
if (dip->di_onlink != 0)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (dip->di_mode == 0 && sc->ip)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (dip->di_projid_hi != 0 &&
!xfs_sb_version_hasprojid32bit(&mp->m_sb))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
default:
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
return;
}
@@ -273,40 +273,40 @@ xfs_scrub_dinode(
*/
if (dip->di_uid == cpu_to_be32(-1U) ||
dip->di_gid == cpu_to_be32(-1U))
xfs_scrub_ino_set_warning(sc, ino);
xchk_ino_set_warning(sc, ino);
/* di_format */
switch (dip->di_format) {
case XFS_DINODE_FMT_DEV:
if (!S_ISCHR(mode) && !S_ISBLK(mode) &&
!S_ISFIFO(mode) && !S_ISSOCK(mode))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_LOCAL:
if (!S_ISDIR(mode) && !S_ISLNK(mode))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_EXTENTS:
if (!S_ISREG(mode) && !S_ISDIR(mode) && !S_ISLNK(mode))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_BTREE:
if (!S_ISREG(mode) && !S_ISDIR(mode))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_UUID:
default:
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
}
/* di_[amc]time.nsec */
if (be32_to_cpu(dip->di_atime.t_nsec) >= NSEC_PER_SEC)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (be32_to_cpu(dip->di_mtime.t_nsec) >= NSEC_PER_SEC)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (be32_to_cpu(dip->di_ctime.t_nsec) >= NSEC_PER_SEC)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/*
* di_size. xfs_dinode_verify checks for things that screw up
@@ -315,19 +315,19 @@ xfs_scrub_dinode(
*/
isize = be64_to_cpu(dip->di_size);
if (isize & (1ULL << 63))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/* Devices, fifos, and sockets must have zero size */
if (!S_ISDIR(mode) && !S_ISREG(mode) && !S_ISLNK(mode) && isize != 0)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/* Directories can't be larger than the data section size (32G) */
if (S_ISDIR(mode) && (isize == 0 || isize >= XFS_DIR2_SPACE_SIZE))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/* Symlinks can't be larger than SYMLINK_MAXLEN */
if (S_ISLNK(mode) && (isize == 0 || isize >= XFS_SYMLINK_MAXLEN))
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/*
* Warn if the running kernel can't handle the kinds of offsets
@@ -336,7 +336,7 @@ xfs_scrub_dinode(
* overly large offsets, flag the inode for admin review.
*/
if (isize >= mp->m_super->s_maxbytes)
xfs_scrub_ino_set_warning(sc, ino);
xchk_ino_set_warning(sc, ino);
/* di_nblocks */
if (flags2 & XFS_DIFLAG2_REFLINK) {
@@ -351,15 +351,15 @@ xfs_scrub_dinode(
*/
if (be64_to_cpu(dip->di_nblocks) >=
mp->m_sb.sb_dblocks + mp->m_sb.sb_rblocks)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
} else {
if (be64_to_cpu(dip->di_nblocks) >= mp->m_sb.sb_dblocks)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
xfs_scrub_inode_flags(sc, dip, ino, mode, flags);
xchk_inode_flags(sc, dip, ino, mode, flags);
xfs_scrub_inode_extsize(sc, dip, ino, mode, flags);
xchk_inode_extsize(sc, dip, ino, mode, flags);
/* di_nextents */
nextents = be32_to_cpu(dip->di_nextents);
@@ -367,31 +367,31 @@ xfs_scrub_dinode(
switch (dip->di_format) {
case XFS_DINODE_FMT_EXTENTS:
if (nextents > fork_recs)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_BTREE:
if (nextents <= fork_recs)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
default:
if (nextents != 0)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
}
/* di_forkoff */
if (XFS_DFORK_APTR(dip) >= (char *)dip + mp->m_sb.sb_inodesize)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (dip->di_anextents != 0 && dip->di_forkoff == 0)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
if (dip->di_forkoff == 0 && dip->di_aformat != XFS_DINODE_FMT_EXTENTS)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/* di_aformat */
if (dip->di_aformat != XFS_DINODE_FMT_LOCAL &&
dip->di_aformat != XFS_DINODE_FMT_EXTENTS &&
dip->di_aformat != XFS_DINODE_FMT_BTREE)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
/* di_anextents */
nextents = be16_to_cpu(dip->di_anextents);
@@ -399,22 +399,22 @@ xfs_scrub_dinode(
switch (dip->di_aformat) {
case XFS_DINODE_FMT_EXTENTS:
if (nextents > fork_recs)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
case XFS_DINODE_FMT_BTREE:
if (nextents <= fork_recs)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
break;
default:
if (nextents != 0)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
if (dip->di_version >= 3) {
if (be32_to_cpu(dip->di_crtime.t_nsec) >= NSEC_PER_SEC)
-xfs_scrub_ino_set_corrupt(sc, ino);
-xfs_scrub_inode_flags2(sc, dip, ino, mode, flags, flags2);
-xfs_scrub_inode_cowextsize(sc, dip, ino, mode, flags,
+xchk_ino_set_corrupt(sc, ino);
+xchk_inode_flags2(sc, dip, ino, mode, flags, flags2);
+xchk_inode_cowextsize(sc, dip, ino, mode, flags,
flags2);
}
}
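The ondisk inode scrub above is driven from userspace through the
XFS_IOC_SCRUB_METADATA ioctl, which is unchanged by this rename. A
minimal caller sketch, assuming the xfs_fs.h definitions of this era
(the helper name and error handling are illustrative only):

	#include <string.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <xfs/xfs.h>	/* struct xfs_scrub_metadata et al. */

	static int scrub_one_inode(int fd, __u64 ino, __u32 gen)
	{
		struct xfs_scrub_metadata sm;

		memset(&sm, 0, sizeof(sm));
		sm.sm_type = XFS_SCRUB_TYPE_INODE;	/* runs xchk_inode() */
		sm.sm_ino = ino;
		sm.sm_gen = gen;
		if (ioctl(fd, XFS_IOC_SCRUB_METADATA, &sm) < 0) {
			perror("scrub");
			return -1;
		}
		if (sm.sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
			printf("inode %llu: corrupt\n", (unsigned long long)ino);
		return 0;
	}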
@@ -425,8 +425,8 @@ xfs_scrub_dinode(
* IGET_UNTRUSTED, which checks the inobt for us.
*/
static void
-xfs_scrub_inode_xref_finobt(
-struct xfs_scrub_context *sc,
+xchk_inode_xref_finobt(
+struct xfs_scrub *sc,
xfs_ino_t ino)
{
struct xfs_inobt_rec_incore rec;
@@ -434,7 +434,7 @@ xfs_scrub_inode_xref_finobt(
int has_record;
int error;
if (!sc->sa.fino_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.fino_cur || xchk_skip_xref(sc->sm))
return;
agino = XFS_INO_TO_AGINO(sc->mp, ino);
@@ -445,12 +445,12 @@
*/
error = xfs_inobt_lookup(sc->sa.fino_cur, agino, XFS_LOOKUP_LE,
&has_record);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.fino_cur) ||
if (!xchk_should_check_xref(sc, &error, &sc->sa.fino_cur) ||
!has_record)
return;
error = xfs_inobt_get_rec(sc->sa.fino_cur, &rec, &has_record);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.fino_cur) ||
if (!xchk_should_check_xref(sc, &error, &sc->sa.fino_cur) ||
!has_record)
return;
@@ -463,54 +463,54 @@
return;
if (rec.ir_free & XFS_INOBT_MASK(agino - rec.ir_startino))
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.fino_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.fino_cur, 0);
}
/* Cross reference the inode fields with the forks. */
STATIC void
-xfs_scrub_inode_xref_bmap(
-struct xfs_scrub_context *sc,
-struct xfs_dinode *dip)
+xchk_inode_xref_bmap(
+struct xfs_scrub *sc,
+struct xfs_dinode *dip)
{
xfs_extnum_t nextents;
xfs_filblks_t count;
xfs_filblks_t acount;
int error;
if (xfs_scrub_skip_xref(sc->sm))
if (xchk_skip_xref(sc->sm))
return;
/* Walk all the extents to check nextents/naextents/nblocks. */
error = xfs_bmap_count_blocks(sc->tp, sc->ip, XFS_DATA_FORK,
&nextents, &count);
if (!xfs_scrub_should_check_xref(sc, &error, NULL))
if (!xchk_should_check_xref(sc, &error, NULL))
return;
if (nextents < be32_to_cpu(dip->di_nextents))
xfs_scrub_ino_xref_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_xref_set_corrupt(sc, sc->ip->i_ino);
error = xfs_bmap_count_blocks(sc->tp, sc->ip, XFS_ATTR_FORK,
&nextents, &acount);
if (!xfs_scrub_should_check_xref(sc, &error, NULL))
if (!xchk_should_check_xref(sc, &error, NULL))
return;
if (nextents != be16_to_cpu(dip->di_anextents))
xfs_scrub_ino_xref_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_xref_set_corrupt(sc, sc->ip->i_ino);
/* Check nblocks against the inode. */
if (count + acount != be64_to_cpu(dip->di_nblocks))
xfs_scrub_ino_xref_set_corrupt(sc, sc->ip->i_ino);
xchk_ino_xref_set_corrupt(sc, sc->ip->i_ino);
}
/* Cross-reference with the other btrees. */
STATIC void
-xfs_scrub_inode_xref(
-struct xfs_scrub_context *sc,
-xfs_ino_t ino,
-struct xfs_dinode *dip)
+xchk_inode_xref(
+struct xfs_scrub *sc,
+xfs_ino_t ino,
+struct xfs_dinode *dip)
{
struct xfs_owner_info oinfo;
xfs_agnumber_t agno;
xfs_agblock_t agbno;
int error;
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
@@ -518,18 +518,18 @@ xfs_scrub_inode_xref(
agno = XFS_INO_TO_AGNO(sc->mp, ino);
agbno = XFS_INO_TO_AGBNO(sc->mp, ino);
error = xfs_scrub_ag_init(sc, agno, &sc->sa);
if (!xfs_scrub_xref_process_error(sc, agno, agbno, &error))
error = xchk_ag_init(sc, agno, &sc->sa);
if (!xchk_xref_process_error(sc, agno, agbno, &error))
return;
xfs_scrub_xref_is_used_space(sc, agbno, 1);
xfs_scrub_inode_xref_finobt(sc, ino);
xchk_xref_is_used_space(sc, agbno, 1);
xchk_inode_xref_finobt(sc, ino);
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INODES);
xfs_scrub_xref_is_owned_by(sc, agbno, 1, &oinfo);
xfs_scrub_xref_is_not_shared(sc, agbno, 1);
xfs_scrub_inode_xref_bmap(sc, dip);
xchk_xref_is_owned_by(sc, agbno, 1, &oinfo);
xchk_xref_is_not_shared(sc, agbno, 1);
xchk_inode_xref_bmap(sc, dip);
xfs_scrub_ag_free(sc, &sc->sa);
xchk_ag_free(sc, &sc->sa);
}
/*
@@ -539,35 +539,35 @@ xfs_scrub_inode_xref(
* reflink filesystem.
*/
static void
-xfs_scrub_inode_check_reflink_iflag(
-struct xfs_scrub_context *sc,
-xfs_ino_t ino)
+xchk_inode_check_reflink_iflag(
+struct xfs_scrub *sc,
+xfs_ino_t ino)
{
struct xfs_mount *mp = sc->mp;
bool has_shared;
int error;
if (!xfs_sb_version_hasreflink(&mp->m_sb))
return;
error = xfs_reflink_inode_has_shared_extents(sc->tp, sc->ip,
&has_shared);
if (!xfs_scrub_xref_process_error(sc, XFS_INO_TO_AGNO(mp, ino),
if (!xchk_xref_process_error(sc, XFS_INO_TO_AGNO(mp, ino),
XFS_INO_TO_AGBNO(mp, ino), &error))
return;
if (xfs_is_reflink_inode(sc->ip) && !has_shared)
xfs_scrub_ino_set_preen(sc, ino);
xchk_ino_set_preen(sc, ino);
else if (!xfs_is_reflink_inode(sc->ip) && has_shared)
xfs_scrub_ino_set_corrupt(sc, ino);
xchk_ino_set_corrupt(sc, ino);
}
/* Scrub an inode. */
int
-xfs_scrub_inode(
-struct xfs_scrub_context *sc)
+xchk_inode(
+struct xfs_scrub *sc)
{
struct xfs_dinode di;
int error = 0;
/*
* If sc->ip is NULL, that means that the setup function called
@@ -575,13 +575,13 @@ xfs_scrub_inode(
* and a NULL inode, so flag the corruption error and return.
*/
if (!sc->ip) {
xfs_scrub_ino_set_corrupt(sc, sc->sm->sm_ino);
xchk_ino_set_corrupt(sc, sc->sm->sm_ino);
return 0;
}
/* Scrub the inode core. */
xfs_inode_to_disk(sc->ip, &di, 0);
xfs_scrub_dinode(sc, &di, sc->ip->i_ino);
xchk_dinode(sc, &di, sc->ip->i_ino);
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
goto out;
@@ -591,9 +591,9 @@
* we scrubbed the dinode.
*/
if (S_ISREG(VFS_I(sc->ip)->i_mode))
xfs_scrub_inode_check_reflink_iflag(sc, sc->ip->i_ino);
xchk_inode_check_reflink_iflag(sc, sc->ip->i_ino);
xfs_scrub_inode_xref(sc, sc->ip->i_ino, &di);
xchk_inode_xref(sc, sc->ip->i_ino, &di);
out:
return error;
}


@@ -27,36 +27,36 @@
/* Set us up to scrub parents. */
int
-xfs_scrub_setup_parent(
-struct xfs_scrub_context *sc,
-struct xfs_inode *ip)
+xchk_setup_parent(
+struct xfs_scrub *sc,
+struct xfs_inode *ip)
{
-return xfs_scrub_setup_inode_contents(sc, ip, 0);
+return xchk_setup_inode_contents(sc, ip, 0);
}
/* Parent pointers */
/* Look for an entry in a parent pointing to this inode. */
-struct xfs_scrub_parent_ctx {
-struct dir_context dc;
-xfs_ino_t ino;
-xfs_nlink_t nlink;
+struct xchk_parent_ctx {
+struct dir_context dc;
+xfs_ino_t ino;
+xfs_nlink_t nlink;
};
/* Look for a single entry in a directory pointing to an inode. */
STATIC int
-xfs_scrub_parent_actor(
-struct dir_context *dc,
-const char *name,
-int namelen,
-loff_t pos,
-u64 ino,
-unsigned type)
+xchk_parent_actor(
+struct dir_context *dc,
+const char *name,
+int namelen,
+loff_t pos,
+u64 ino,
+unsigned type)
{
struct xfs_scrub_parent_ctx *spc;
struct xchk_parent_ctx *spc;
spc = container_of(dc, struct xfs_scrub_parent_ctx, dc);
spc = container_of(dc, struct xchk_parent_ctx, dc);
if (spc->ino == ino)
spc->nlink++;
return 0;
@@ -64,21 +64,21 @@ xfs_scrub_parent_actor(
/* Count the number of dentries in the parent dir that point to this inode. */
STATIC int
-xfs_scrub_parent_count_parent_dentries(
-struct xfs_scrub_context *sc,
-struct xfs_inode *parent,
-xfs_nlink_t *nlink)
+xchk_parent_count_parent_dentries(
+struct xfs_scrub *sc,
+struct xfs_inode *parent,
+xfs_nlink_t *nlink)
{
-struct xfs_scrub_parent_ctx spc = {
-.dc.actor = xfs_scrub_parent_actor,
+struct xchk_parent_ctx spc = {
+.dc.actor = xchk_parent_actor,
.dc.pos = 0,
.ino = sc->ip->i_ino,
.nlink = 0,
};
size_t bufsize;
loff_t oldpos;
uint lock_mode;
int error = 0;
/*
* If there are any blocks, read-ahead block 0 as we're almost
@@ -120,16 +120,16 @@ out:
* entry pointing back to the inode being scrubbed.
*/
STATIC int
-xfs_scrub_parent_validate(
-struct xfs_scrub_context *sc,
-xfs_ino_t dnum,
-bool *try_again)
+xchk_parent_validate(
+struct xfs_scrub *sc,
+xfs_ino_t dnum,
+bool *try_again)
{
struct xfs_mount *mp = sc->mp;
struct xfs_inode *dp = NULL;
xfs_nlink_t expected_nlink;
xfs_nlink_t nlink;
int error = 0;
*try_again = false;
@@ -138,7 +138,7 @@ xfs_scrub_parent_validate(
/* '..' must not point to ourselves. */
if (sc->ip->i_ino == dnum) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
@@ -165,13 +165,13 @@
error = xfs_iget(mp, sc->tp, dnum, XFS_IGET_UNTRUSTED, 0, &dp);
if (error == -EINVAL) {
error = -EFSCORRUPTED;
xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0, &error);
xchk_fblock_process_error(sc, XFS_DATA_FORK, 0, &error);
goto out;
}
if (!xfs_scrub_fblock_xref_process_error(sc, XFS_DATA_FORK, 0, &error))
if (!xchk_fblock_xref_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out;
if (dp == sc->ip || !S_ISDIR(VFS_I(dp)->i_mode)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out_rele;
}
@@ -183,12 +183,12 @@
* the child inodes.
*/
if (xfs_ilock_nowait(dp, XFS_IOLOCK_SHARED)) {
error = xfs_scrub_parent_count_parent_dentries(sc, dp, &nlink);
if (!xfs_scrub_fblock_xref_process_error(sc, XFS_DATA_FORK, 0,
error = xchk_parent_count_parent_dentries(sc, dp, &nlink);
if (!xchk_fblock_xref_process_error(sc, XFS_DATA_FORK, 0,
&error))
goto out_unlock;
if (nlink != expected_nlink)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out_unlock;
}
@@ -200,18 +200,18 @@
*/
xfs_iunlock(sc->ip, sc->ilock_flags);
sc->ilock_flags = 0;
error = xfs_scrub_ilock_inverted(dp, XFS_IOLOCK_SHARED);
error = xchk_ilock_inverted(dp, XFS_IOLOCK_SHARED);
if (error)
goto out_rele;
/* Go looking for our dentry. */
error = xfs_scrub_parent_count_parent_dentries(sc, dp, &nlink);
if (!xfs_scrub_fblock_xref_process_error(sc, XFS_DATA_FORK, 0, &error))
error = xchk_parent_count_parent_dentries(sc, dp, &nlink);
if (!xchk_fblock_xref_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out_unlock;
/* Drop the parent lock, relock this inode. */
xfs_iunlock(dp, XFS_IOLOCK_SHARED);
error = xfs_scrub_ilock_inverted(sc->ip, XFS_IOLOCK_EXCL);
error = xchk_ilock_inverted(sc->ip, XFS_IOLOCK_EXCL);
if (error)
goto out_rele;
sc->ilock_flags = XFS_IOLOCK_EXCL;
@@ -225,43 +225,43 @@ xfs_scrub_parent_validate(
/* Look up '..' to see if the inode changed. */
error = xfs_dir_lookup(sc->tp, sc->ip, &xfs_name_dotdot, &dnum, NULL);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out_rele;
/* Drat, parent changed. Try again! */
if (dnum != dp->i_ino) {
iput(VFS_I(dp));
xfs_irele(dp);
*try_again = true;
return 0;
}
iput(VFS_I(dp));
xfs_irele(dp);
/*
* '..' didn't change, so check that there was only one entry
* for us in the parent.
*/
if (nlink != expected_nlink)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
return error;
out_unlock:
xfs_iunlock(dp, XFS_IOLOCK_SHARED);
out_rele:
iput(VFS_I(dp));
xfs_irele(dp);
out:
return error;
}
/* Scrub a parent pointer. */
int
-xfs_scrub_parent(
-struct xfs_scrub_context *sc)
+xchk_parent(
+struct xfs_scrub *sc)
{
struct xfs_mount *mp = sc->mp;
xfs_ino_t dnum;
bool try_again;
int tries = 0;
int error = 0;
/*
* If we're a directory, check that the '..' link points up to
@@ -272,7 +272,7 @@ xfs_scrub_parent(
/* We're not a special inode, are we? */
if (!xfs_verify_dir_ino(mp, sc->ip->i_ino)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
@@ -288,10 +288,10 @@
/* Look up '..' */
error = xfs_dir_lookup(sc->tp, sc->ip, &xfs_name_dotdot, &dnum, NULL);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out;
if (!xfs_verify_dir_ino(mp, dnum)) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
@@ -299,12 +299,12 @@
if (sc->ip == mp->m_rootip) {
if (sc->ip->i_ino != mp->m_sb.sb_rootino ||
sc->ip->i_ino != dnum)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
do {
error = xfs_scrub_parent_validate(sc, dnum, &try_again);
error = xchk_parent_validate(sc, dnum, &try_again);
if (error)
goto out;
} while (try_again && ++tries < 20);
@@ -314,7 +314,7 @@
* incomplete. Userspace can decide if it wants to try again.
*/
if (try_again && tries == 20)
xfs_scrub_set_incomplete(sc);
xchk_set_incomplete(sc);
out:
/*
* If we failed to lock the parent inode even after a retry, just mark
@@ -322,7 +322,7 @@ out:
*/
if (sc->try_harder && error == -EDEADLOCK) {
error = 0;
xfs_scrub_set_incomplete(sc);
xchk_set_incomplete(sc);
}
return error;
}


@@ -30,8 +30,8 @@
/* Convert a scrub type code to a DQ flag, or return 0 if error. */
static inline uint
-xfs_scrub_quota_to_dqtype(
-struct xfs_scrub_context *sc)
+xchk_quota_to_dqtype(
+struct xfs_scrub *sc)
{
switch (sc->sm->sm_type) {
case XFS_SCRUB_TYPE_UQUOTA:
@@ -47,24 +47,24 @@ xfs_scrub_quota_to_dqtype(
/* Set us up to scrub a quota. */
int
-xfs_scrub_setup_quota(
-struct xfs_scrub_context *sc,
-struct xfs_inode *ip)
+xchk_setup_quota(
+struct xfs_scrub *sc,
+struct xfs_inode *ip)
{
uint dqtype;
int error;
if (!XFS_IS_QUOTA_RUNNING(sc->mp) || !XFS_IS_QUOTA_ON(sc->mp))
return -ENOENT;
dqtype = xfs_scrub_quota_to_dqtype(sc);
dqtype = xchk_quota_to_dqtype(sc);
if (dqtype == 0)
return -EINVAL;
sc->has_quotaofflock = true;
mutex_lock(&sc->mp->m_quotainfo->qi_quotaofflock);
if (!xfs_this_quota_on(sc->mp, dqtype))
return -ENOENT;
error = xfs_scrub_setup_fs(sc, ip);
error = xchk_setup_fs(sc, ip);
if (error)
return error;
sc->ip = xfs_quota_inode(sc->mp, dqtype);
@@ -75,35 +75,35 @@ xfs_scrub_setup_quota(
/* Quotas. */
-struct xfs_scrub_quota_info {
-struct xfs_scrub_context *sc;
-xfs_dqid_t last_id;
+struct xchk_quota_info {
+struct xfs_scrub *sc;
+xfs_dqid_t last_id;
};
/* Scrub the fields in an individual quota item. */
STATIC int
-xfs_scrub_quota_item(
-struct xfs_dquot *dq,
-uint dqtype,
-void *priv)
+xchk_quota_item(
+struct xfs_dquot *dq,
+uint dqtype,
+void *priv)
{
-struct xfs_scrub_quota_info *sqi = priv;
-struct xfs_scrub_context *sc = sqi->sc;
+struct xchk_quota_info *sqi = priv;
+struct xfs_scrub *sc = sqi->sc;
struct xfs_mount *mp = sc->mp;
struct xfs_disk_dquot *d = &dq->q_core;
struct xfs_quotainfo *qi = mp->m_quotainfo;
xfs_fileoff_t offset;
unsigned long long bsoft;
unsigned long long isoft;
unsigned long long rsoft;
unsigned long long bhard;
unsigned long long ihard;
unsigned long long rhard;
unsigned long long bcount;
unsigned long long icount;
unsigned long long rcount;
xfs_ino_t fs_icount;
xfs_dqid_t id = be32_to_cpu(d->d_id);
/*
* Except for the root dquot, the actual dquot we got must either have
@@ -111,16 +111,16 @@ xfs_scrub_quota_item(
*/
offset = id / qi->qi_dqperchunk;
if (id && id <= sqi->last_id)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
sqi->last_id = id;
/* Did we get the dquot type we wanted? */
if (dqtype != (d->d_flags & XFS_DQ_ALLTYPES))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
if (d->d_pad0 != cpu_to_be32(0) || d->d_pad != cpu_to_be16(0))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
/* Check the limits. */
bhard = be64_to_cpu(d->d_blk_hardlimit);
@@ -140,19 +140,19 @@ xfs_scrub_quota_item(
* the hard limit.
*/
if (bhard > mp->m_sb.sb_dblocks)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
if (bsoft > bhard)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
if (ihard > mp->m_maxicount)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
if (isoft > ihard)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
if (rhard > mp->m_sb.sb_rblocks)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
if (rsoft > rhard)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
/* Check the resource counts. */
bcount = be64_to_cpu(d->d_bcount);
@@ -167,15 +167,15 @@
*/
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
if (mp->m_sb.sb_dblocks < bcount)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK,
xchk_fblock_set_warning(sc, XFS_DATA_FORK,
offset);
} else {
if (mp->m_sb.sb_dblocks < bcount)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
offset);
}
if (icount > fs_icount || rcount > mp->m_sb.sb_rblocks)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, offset);
/*
* We can violate the hard limits if the admin suddenly sets a
@@ -183,29 +183,29 @@ xfs_scrub_quota_item(
* admin review.
*/
if (id != 0 && bhard != 0 && bcount > bhard)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
if (id != 0 && ihard != 0 && icount > ihard)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
if (id != 0 && rhard != 0 && rcount > rhard)
xfs_scrub_fblock_set_warning(sc, XFS_DATA_FORK, offset);
xchk_fblock_set_warning(sc, XFS_DATA_FORK, offset);
return 0;
}
/* Check the quota's data fork. */
STATIC int
-xfs_scrub_quota_data_fork(
-struct xfs_scrub_context *sc)
+xchk_quota_data_fork(
+struct xfs_scrub *sc)
{
struct xfs_bmbt_irec irec = { 0 };
struct xfs_iext_cursor icur;
struct xfs_quotainfo *qi = sc->mp->m_quotainfo;
struct xfs_ifork *ifp;
xfs_fileoff_t max_dqid_off;
int error = 0;
/* Invoke the fork scrubber. */
error = xfs_scrub_metadata_inode_forks(sc);
error = xchk_metadata_inode_forks(sc);
if (error || (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
return error;
@@ -213,7 +213,7 @@ xfs_scrub_quota_data_fork(
max_dqid_off = ((xfs_dqid_t)-1) / qi->qi_dqperchunk;
ifp = XFS_IFORK_PTR(sc->ip, XFS_DATA_FORK);
for_each_xfs_iext(ifp, &icur, &irec) {
if (xfs_scrub_should_terminate(sc, &error))
if (xchk_should_terminate(sc, &error))
break;
/*
* delalloc extents or blocks mapped above the highest
@@ -222,7 +222,7 @@
if (isnullstartblock(irec.br_startblock) ||
irec.br_startoff > max_dqid_off ||
irec.br_startoff + irec.br_blockcount - 1 > max_dqid_off) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK,
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
irec.br_startoff);
break;
}
@@ -233,19 +233,19 @@
/* Scrub all of a quota type's items. */
int
-xfs_scrub_quota(
-struct xfs_scrub_context *sc)
+xchk_quota(
+struct xfs_scrub *sc)
{
-struct xfs_scrub_quota_info sqi;
+struct xchk_quota_info sqi;
struct xfs_mount *mp = sc->mp;
struct xfs_quotainfo *qi = mp->m_quotainfo;
uint dqtype;
int error = 0;
dqtype = xfs_scrub_quota_to_dqtype(sc);
dqtype = xchk_quota_to_dqtype(sc);
/* Look for problem extents. */
error = xfs_scrub_quota_data_fork(sc);
error = xchk_quota_data_fork(sc);
if (error)
goto out;
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
@@ -260,10 +260,10 @@ xfs_scrub_quota(
sc->ilock_flags = 0;
sqi.sc = sc;
sqi.last_id = 0;
error = xfs_qm_dqiterate(mp, dqtype, xfs_scrub_quota_item, &sqi);
error = xfs_qm_dqiterate(mp, dqtype, xchk_quota_item, &sqi);
sc->ilock_flags = XFS_ILOCK_EXCL;
xfs_ilock(sc->ip, sc->ilock_flags);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK,
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK,
sqi.last_id * qi->qi_dqperchunk, &error))
goto out;


@@ -28,11 +28,11 @@
* Set us up to scrub reference count btrees.
*/
int
-xfs_scrub_setup_ag_refcountbt(
-struct xfs_scrub_context *sc,
-struct xfs_inode *ip)
+xchk_setup_ag_refcountbt(
+struct xfs_scrub *sc,
+struct xfs_inode *ip)
{
return xfs_scrub_setup_ag_btree(sc, ip, false);
return xchk_setup_ag_btree(sc, ip, false);
}
/* Reference count btree scrubber. */
@@ -73,22 +73,22 @@ xfs_scrub_setup_ag_refcountbt(
* If the refcount is correct, all the check conditions in the algorithm
* should always hold true. If not, the refcount is incorrect.
*/
-struct xfs_scrub_refcnt_frag {
-struct list_head list;
-struct xfs_rmap_irec rm;
+struct xchk_refcnt_frag {
+struct list_head list;
+struct xfs_rmap_irec rm;
};
-struct xfs_scrub_refcnt_check {
-struct xfs_scrub_context *sc;
-struct list_head fragments;
+struct xchk_refcnt_check {
+struct xfs_scrub *sc;
+struct list_head fragments;
/* refcount extent we're examining */
xfs_agblock_t bno;
xfs_extlen_t len;
xfs_nlink_t refcount;
/* number of owners seen */
xfs_nlink_t seen;
};
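The invariant this fragment machinery enforces can be stated simply:
every block inside a refcount record (bno, len, refcount) must be
covered by exactly 'refcount' reverse mappings. A self-contained
sketch of that rule, array-based for clarity rather than the
list-based walker used below (all names here are illustrative):

	struct frag {			/* one rmap extent, reduced */
		unsigned long	start;
		unsigned long	len;
	};

	/*
	 * How many fragments cover block 'bno'?  For a consistent
	 * refcountbt record this must equal the recorded refcount
	 * for every block in [bno, bno + len).
	 */
	static unsigned int coverage_at(const struct frag *frags, int nr,
					unsigned long bno)
	{
		unsigned int seen = 0;
		int i;

		for (i = 0; i < nr; i++)
			if (frags[i].start <= bno &&
			    bno < frags[i].start + frags[i].len)
				seen++;
		return seen;
	}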
/*
@@ -99,18 +99,18 @@ struct xfs_scrub_refcnt_check {
* fragments as the refcountbt says we should have.
*/
STATIC int
xfs_scrub_refcountbt_rmap_check(
xchk_refcountbt_rmap_check(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
{
struct xfs_scrub_refcnt_check *refchk = priv;
struct xfs_scrub_refcnt_frag *frag;
struct xchk_refcnt_check *refchk = priv;
struct xchk_refcnt_frag *frag;
xfs_agblock_t rm_last;
xfs_agblock_t rc_last;
int error = 0;
if (xfs_scrub_should_terminate(refchk->sc, &error))
if (xchk_should_terminate(refchk->sc, &error))
return error;
rm_last = rec->rm_startblock + rec->rm_blockcount - 1;
@@ -118,7 +118,7 @@ xfs_scrub_refcountbt_rmap_check(
/* Confirm that a single-owner refc extent is a CoW stage. */
if (refchk->refcount == 1 && rec->rm_owner != XFS_RMAP_OWN_COW) {
xfs_scrub_btree_xref_set_corrupt(refchk->sc, cur, 0);
xchk_btree_xref_set_corrupt(refchk->sc, cur, 0);
return 0;
}
@@ -135,7 +135,7 @@ xfs_scrub_refcountbt_rmap_check(
* is healthy each rmap_irec we see will be in agbno order
* so we don't need insertion sort here.
*/
frag = kmem_alloc(sizeof(struct xfs_scrub_refcnt_frag),
frag = kmem_alloc(sizeof(struct xchk_refcnt_frag),
KM_MAYFAIL);
if (!frag)
return -ENOMEM;
@@ -154,12 +154,12 @@
* we have a refcountbt error.
*/
STATIC void
-xfs_scrub_refcountbt_process_rmap_fragments(
-struct xfs_scrub_refcnt_check *refchk)
+xchk_refcountbt_process_rmap_fragments(
+struct xchk_refcnt_check *refchk)
{
struct list_head worklist;
struct xfs_scrub_refcnt_frag *frag;
struct xfs_scrub_refcnt_frag *n;
struct xchk_refcnt_frag *frag;
struct xchk_refcnt_frag *n;
xfs_agblock_t bno;
xfs_agblock_t rbno;
xfs_agblock_t next_rbno;
@@ -277,13 +277,13 @@ done:
/* Use the rmap entries covering this extent to verify the refcount. */
STATIC void
xfs_scrub_refcountbt_xref_rmap(
struct xfs_scrub_context *sc,
xchk_refcountbt_xref_rmap(
struct xfs_scrub *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
xfs_nlink_t refcount)
{
struct xfs_scrub_refcnt_check refchk = {
struct xchk_refcnt_check refchk = {
.sc = sc,
.bno = bno,
.len = len,
@@ -292,11 +292,11 @@ xfs_scrub_refcountbt_xref_rmap(
};
struct xfs_rmap_irec low;
struct xfs_rmap_irec high;
struct xfs_scrub_refcnt_frag *frag;
struct xfs_scrub_refcnt_frag *n;
struct xchk_refcnt_frag *frag;
struct xchk_refcnt_frag *n;
int error;
if (!sc->sa.rmap_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm))
return;
/* Cross-reference with the rmapbt to confirm the refcount. */
@@ -307,13 +307,13 @@ xfs_scrub_refcountbt_xref_rmap(
INIT_LIST_HEAD(&refchk.fragments);
error = xfs_rmap_query_range(sc->sa.rmap_cur, &low, &high,
&xfs_scrub_refcountbt_rmap_check, &refchk);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
&xchk_refcountbt_rmap_check, &refchk);
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
goto out_free;
xfs_scrub_refcountbt_process_rmap_fragments(&refchk);
xchk_refcountbt_process_rmap_fragments(&refchk);
if (refcount != refchk.seen)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
out_free:
list_for_each_entry_safe(frag, n, &refchk.fragments, list) {
@@ -324,34 +324,34 @@ out_free:
/* Cross-reference with the other btrees. */
STATIC void
-xfs_scrub_refcountbt_xref(
-struct xfs_scrub_context *sc,
-xfs_agblock_t agbno,
-xfs_extlen_t len,
-xfs_nlink_t refcount)
+xchk_refcountbt_xref(
+struct xfs_scrub *sc,
+xfs_agblock_t agbno,
+xfs_extlen_t len,
+xfs_nlink_t refcount)
{
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
xfs_scrub_xref_is_used_space(sc, agbno, len);
xfs_scrub_xref_is_not_inode_chunk(sc, agbno, len);
xfs_scrub_refcountbt_xref_rmap(sc, agbno, len, refcount);
xchk_xref_is_used_space(sc, agbno, len);
xchk_xref_is_not_inode_chunk(sc, agbno, len);
xchk_refcountbt_xref_rmap(sc, agbno, len, refcount);
}
/* Scrub a refcountbt record. */
STATIC int
-xfs_scrub_refcountbt_rec(
-struct xfs_scrub_btree *bs,
-union xfs_btree_rec *rec)
+xchk_refcountbt_rec(
+struct xchk_btree *bs,
+union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
xfs_agblock_t *cow_blocks = bs->private;
xfs_agnumber_t agno = bs->cur->bc_private.a.agno;
xfs_agblock_t bno;
xfs_extlen_t len;
xfs_nlink_t refcount;
bool has_cowflag;
int error = 0;
bno = be32_to_cpu(rec->refc.rc_startblock);
len = be32_to_cpu(rec->refc.rc_blockcount);
@@ -360,7 +360,7 @@ xfs_scrub_refcountbt_rec(
/* Only CoW records can have refcount == 1. */
has_cowflag = (bno & XFS_REFC_COW_START);
if ((refcount == 1 && !has_cowflag) || (refcount != 1 && has_cowflag))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (has_cowflag)
(*cow_blocks) += len;
@@ -369,75 +369,75 @@ xfs_scrub_refcountbt_rec(
if (bno + len <= bno ||
!xfs_verify_agbno(mp, agno, bno) ||
!xfs_verify_agbno(mp, agno, bno + len - 1))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (refcount == 0)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
xfs_scrub_refcountbt_xref(bs->sc, bno, len, refcount);
xchk_refcountbt_xref(bs->sc, bno, len, refcount);
return error;
}
/* Make sure we have as many refc blocks as the rmap says. */
STATIC void
-xfs_scrub_refcount_xref_rmap(
-struct xfs_scrub_context *sc,
-struct xfs_owner_info *oinfo,
-xfs_filblks_t cow_blocks)
+xchk_refcount_xref_rmap(
+struct xfs_scrub *sc,
+struct xfs_owner_info *oinfo,
+xfs_filblks_t cow_blocks)
{
xfs_extlen_t refcbt_blocks = 0;
xfs_filblks_t blocks;
int error;
if (!sc->sa.rmap_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm))
return;
/* Check that we saw as many refcbt blocks as the rmap knows about. */
error = xfs_btree_count_blocks(sc->sa.refc_cur, &refcbt_blocks);
if (!xfs_scrub_btree_process_error(sc, sc->sa.refc_cur, 0, &error))
if (!xchk_btree_process_error(sc, sc->sa.refc_cur, 0, &error))
return;
error = xfs_scrub_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo,
error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo,
&blocks);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (blocks != refcbt_blocks)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
/* Check that we saw as many cow blocks as the rmap knows about. */
xfs_rmap_ag_owner(oinfo, XFS_RMAP_OWN_COW);
error = xfs_scrub_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo,
error = xchk_count_rmap_ownedby_ag(sc, sc->sa.rmap_cur, oinfo,
&blocks);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (blocks != cow_blocks)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
}
/* Scrub the refcount btree for some AG. */
int
-xfs_scrub_refcountbt(
-struct xfs_scrub_context *sc)
+xchk_refcountbt(
+struct xfs_scrub *sc)
{
struct xfs_owner_info oinfo;
xfs_agblock_t cow_blocks = 0;
int error;
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_REFC);
error = xfs_scrub_btree(sc, sc->sa.refc_cur, xfs_scrub_refcountbt_rec,
error = xchk_btree(sc, sc->sa.refc_cur, xchk_refcountbt_rec,
&oinfo, &cow_blocks);
if (error)
return error;
xfs_scrub_refcount_xref_rmap(sc, &oinfo, cow_blocks);
xchk_refcount_xref_rmap(sc, &oinfo, cow_blocks);
return 0;
}
/* xref check that a cow staging extent is marked in the refcountbt. */
void
-xfs_scrub_xref_is_cow_staging(
-struct xfs_scrub_context *sc,
+xchk_xref_is_cow_staging(
+struct xfs_scrub *sc,
xfs_agblock_t agbno,
xfs_extlen_t len)
{
@@ -446,35 +446,35 @@ xfs_scrub_xref_is_cow_staging(
int has_refcount;
int error;
if (!sc->sa.refc_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.refc_cur || xchk_skip_xref(sc->sm))
return;
/* Find the CoW staging extent. */
error = xfs_refcount_lookup_le(sc->sa.refc_cur,
agbno + XFS_REFC_COW_START, &has_refcount);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.refc_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (!has_refcount) {
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
return;
}
error = xfs_refcount_get_rec(sc->sa.refc_cur, &rc, &has_refcount);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.refc_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (!has_refcount) {
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
return;
}
/* CoW flag must be set, refcount must be 1. */
has_cowflag = (rc.rc_startblock & XFS_REFC_COW_START);
if (!has_cowflag || rc.rc_refcount != 1)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
/* Must be at least as long as what was passed in */
if (rc.rc_blockcount < len)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
}
/*
@@ -482,20 +482,20 @@ xfs_scrub_xref_is_cow_staging(
* can have multiple owners.
*/
void
-xfs_scrub_xref_is_not_shared(
-struct xfs_scrub_context *sc,
-xfs_agblock_t agbno,
-xfs_extlen_t len)
+xchk_xref_is_not_shared(
+struct xfs_scrub *sc,
+xfs_agblock_t agbno,
+xfs_extlen_t len)
{
bool shared;
int error;
if (!sc->sa.refc_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.refc_cur || xchk_skip_xref(sc->sm))
return;
error = xfs_refcount_has_record(sc->sa.refc_cur, agbno, len, &shared);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.refc_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (shared)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
}


@@ -34,6 +34,7 @@
#include "scrub/common.h"
#include "scrub/trace.h"
#include "scrub/repair.h"
+#include "scrub/bitmap.h"
/*
* Attempt to repair some metadata, if the metadata is corrupt and userspace
@@ -41,21 +42,21 @@
* and will set *fixed to true if it thinks it repaired anything.
*/
int
-xfs_repair_attempt(
-struct xfs_inode *ip,
-struct xfs_scrub_context *sc,
-bool *fixed)
+xrep_attempt(
+struct xfs_inode *ip,
+struct xfs_scrub *sc,
+bool *fixed)
{
int error = 0;
int error = 0;
trace_xfs_repair_attempt(ip, sc->sm, error);
trace_xrep_attempt(ip, sc->sm, error);
xfs_scrub_ag_btcur_free(&sc->sa);
xchk_ag_btcur_free(&sc->sa);
/* Repair whatever's broken. */
ASSERT(sc->ops->repair);
error = sc->ops->repair(sc);
trace_xfs_repair_done(ip, sc->sm, error);
trace_xrep_done(ip, sc->sm, error);
switch (error) {
case 0:
/*
@@ -93,8 +94,8 @@ xfs_repair_attempt(
* structure to track rate limiting information.
*/
void
-xfs_repair_failure(
-struct xfs_mount *mp)
+xrep_failure(
+struct xfs_mount *mp)
{
xfs_alert_ratelimited(mp,
"Corruption not fixed during online repair. Unmount and run xfs_repair.");
@@ -105,12 +106,12 @@ xfs_repair_failure(
* given mountpoint.
*/
int
-xfs_repair_probe(
-struct xfs_scrub_context *sc)
+xrep_probe(
+struct xfs_scrub *sc)
{
int error = 0;
if (xfs_scrub_should_terminate(sc, &error))
if (xchk_should_terminate(sc, &error))
return error;
return 0;
@@ -121,15 +122,18 @@ xfs_repair_probe(
* the btree cursors.
*/
int
-xfs_repair_roll_ag_trans(
-struct xfs_scrub_context *sc)
+xrep_roll_ag_trans(
+struct xfs_scrub *sc)
{
int error;
/* Keep the AG header buffers locked so we can keep going. */
-xfs_trans_bhold(sc->tp, sc->sa.agi_bp);
-xfs_trans_bhold(sc->tp, sc->sa.agf_bp);
-xfs_trans_bhold(sc->tp, sc->sa.agfl_bp);
+if (sc->sa.agi_bp)
+xfs_trans_bhold(sc->tp, sc->sa.agi_bp);
+if (sc->sa.agf_bp)
+xfs_trans_bhold(sc->tp, sc->sa.agf_bp);
+if (sc->sa.agfl_bp)
+xfs_trans_bhold(sc->tp, sc->sa.agfl_bp);
/* Roll the transaction. */
error = xfs_trans_roll(&sc->tp);
@@ -137,9 +141,12 @@ xfs_repair_roll_ag_trans(
goto out_release;
/* Join AG headers to the new transaction. */
-xfs_trans_bjoin(sc->tp, sc->sa.agi_bp);
-xfs_trans_bjoin(sc->tp, sc->sa.agf_bp);
-xfs_trans_bjoin(sc->tp, sc->sa.agfl_bp);
+if (sc->sa.agi_bp)
+xfs_trans_bjoin(sc->tp, sc->sa.agi_bp);
+if (sc->sa.agf_bp)
+xfs_trans_bjoin(sc->tp, sc->sa.agf_bp);
+if (sc->sa.agfl_bp)
+xfs_trans_bjoin(sc->tp, sc->sa.agfl_bp);
return 0;
@@ -149,9 +156,12 @@ out_release:
* buffers will be released during teardown on our way out
* of the kernel.
*/
-xfs_trans_bhold_release(sc->tp, sc->sa.agi_bp);
-xfs_trans_bhold_release(sc->tp, sc->sa.agf_bp);
-xfs_trans_bhold_release(sc->tp, sc->sa.agfl_bp);
+if (sc->sa.agi_bp)
+xfs_trans_bhold_release(sc->tp, sc->sa.agi_bp);
+if (sc->sa.agf_bp)
+xfs_trans_bhold_release(sc->tp, sc->sa.agf_bp);
+if (sc->sa.agfl_bp)
+xfs_trans_bhold_release(sc->tp, sc->sa.agfl_bp);
return error;
}
@@ -162,10 +172,10 @@ out_release:
* in AG reservations) to construct a whole btree.
*/
bool
-xfs_repair_ag_has_space(
-struct xfs_perag *pag,
-xfs_extlen_t nr_blocks,
-enum xfs_ag_resv_type type)
+xrep_ag_has_space(
+struct xfs_perag *pag,
+xfs_extlen_t nr_blocks,
+enum xfs_ag_resv_type type)
{
return !xfs_ag_resv_critical(pag, XFS_AG_RESV_RMAPBT) &&
!xfs_ag_resv_critical(pag, XFS_AG_RESV_METADATA) &&
@@ -178,8 +188,8 @@ xfs_repair_ag_has_space(
* any type of per-AG btree.
*/
xfs_extlen_t
-xfs_repair_calc_ag_resblks(
-struct xfs_scrub_context *sc)
+xrep_calc_ag_resblks(
+struct xfs_scrub *sc)
{
struct xfs_mount *mp = sc->mp;
struct xfs_scrub_metadata *sm = sc->sm;
@@ -231,7 +241,7 @@ xfs_repair_calc_ag_resblks(
}
xfs_perag_put(pag);
trace_xfs_repair_calc_ag_resblks(mp, sm->sm_agno, icount, aglen,
trace_xrep_calc_ag_resblks(mp, sm->sm_agno, icount, aglen,
freelen, usedlen);
/*
@@ -270,7 +280,7 @@ xfs_repair_calc_ag_resblks(
rmapbt_sz = 0;
}
trace_xfs_repair_calc_ag_resblks_btsize(mp, sm->sm_agno, bnobt_sz,
trace_xrep_calc_ag_resblks_btsize(mp, sm->sm_agno, bnobt_sz,
inobt_sz, rmapbt_sz, refcbt_sz);
return max(max(bnobt_sz, inobt_sz), max(rmapbt_sz, refcbt_sz));
@@ -278,15 +288,15 @@
/* Allocate a block in an AG. */
int
-xfs_repair_alloc_ag_block(
-struct xfs_scrub_context *sc,
-struct xfs_owner_info *oinfo,
-xfs_fsblock_t *fsbno,
-enum xfs_ag_resv_type resv)
+xrep_alloc_ag_block(
+struct xfs_scrub *sc,
+struct xfs_owner_info *oinfo,
+xfs_fsblock_t *fsbno,
+enum xfs_ag_resv_type resv)
{
struct xfs_alloc_arg args = {0};
xfs_agblock_t bno;
int error;
switch (resv) {
case XFS_AG_RESV_AGFL:
@@ -329,8 +339,8 @@ xfs_repair_alloc_ag_block(
/* Initialize a new AG btree root block with zero entries. */
int
-xfs_repair_init_btblock(
-struct xfs_scrub_context *sc,
+xrep_init_btblock(
+struct xfs_scrub *sc,
xfs_fsblock_t fsb,
struct xfs_buf **bpp,
xfs_btnum_t btnum,
@@ -340,7 +350,7 @@ xfs_repair_init_btblock(
struct xfs_mount *mp = sc->mp;
struct xfs_buf *bp;
trace_xfs_repair_init_btblock(mp, XFS_FSB_TO_AGNO(mp, fsb),
trace_xrep_init_btblock(mp, XFS_FSB_TO_AGNO(mp, fsb),
XFS_FSB_TO_AGBNO(mp, fsb), btnum);
ASSERT(XFS_FSB_TO_AGNO(mp, fsb) == sc->sa.agno);
@@ -367,222 +377,29 @@ xfs_repair_init_btblock(
*
* However, that leaves the matter of removing all the metadata describing the
* old broken structure. For primary metadata we use the rmap data to collect
-* every extent with a matching rmap owner (exlist); we then iterate all other
+* every extent with a matching rmap owner (bitmap); we then iterate all other
* metadata structures with the same rmap owner to collect the extents that
-* cannot be removed (sublist). We then subtract sublist from exlist to
+* cannot be removed (sublist). We then subtract sublist from bitmap to
* derive the blocks that were used by the old btree. These blocks can be
* reaped.
*
* For rmapbt reconstructions we must use different tactics for extent
* collection. First we iterate all primary metadata (this excludes the old
* rmapbt, obviously) to generate new rmap records. The gaps in the rmap
-* records are collected as exlist. The bnobt records are collected as
-* sublist. As with the other btrees we subtract sublist from exlist, and the
+* records are collected as bitmap. The bnobt records are collected as
+* sublist. As with the other btrees we subtract sublist from bitmap, and the
* result (since the rmapbt lives in the free space) are the blocks from the
* old rmapbt.
*/
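Concretely, the collection step is a set subtraction over block
ranges. A hypothetical caller, sketched against the xfs_bitmap
helpers this series adds in scrub/bitmap.h; the two collect_*
helpers are stand-ins for the per-btree iteration:

	struct xfs_bitmap	old_tree;	/* extents with our rmap owner */
	struct xfs_bitmap	keep;		/* still in use elsewhere */
	int			error;

	xfs_bitmap_init(&old_tree);
	xfs_bitmap_init(&keep);
	error = collect_owned_extents(sc, &old_tree);	/* illustrative */
	if (error)
		goto out;
	error = collect_live_extents(sc, &keep);	/* illustrative */
	if (error)
		goto out;
	/* old_tree -= keep; what remains belonged to the dead btree */
	error = xfs_bitmap_disunion(&old_tree, &keep);
	if (error)
		goto out;
	error = xrep_reap_extents(sc, &old_tree, &oinfo, XFS_AG_RESV_NONE);
out:
	xfs_bitmap_destroy(&keep);
	xfs_bitmap_destroy(&old_tree);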
/* Collect a dead btree extent for later disposal. */
int
xfs_repair_collect_btree_extent(
struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *exlist,
xfs_fsblock_t fsbno,
xfs_extlen_t len)
{
struct xfs_repair_extent *rex;
trace_xfs_repair_collect_btree_extent(sc->mp,
XFS_FSB_TO_AGNO(sc->mp, fsbno),
XFS_FSB_TO_AGBNO(sc->mp, fsbno), len);
rex = kmem_alloc(sizeof(struct xfs_repair_extent), KM_MAYFAIL);
if (!rex)
return -ENOMEM;
INIT_LIST_HEAD(&rex->list);
rex->fsbno = fsbno;
rex->len = len;
list_add_tail(&rex->list, &exlist->list);
return 0;
}
/*
* An error happened during the rebuild so the transaction will be cancelled.
* The fs will shut down, and the administrator has to unmount and run repair.
* Therefore, free all the memory associated with the list so we can die.
*/
void
xfs_repair_cancel_btree_extents(
struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *exlist)
{
struct xfs_repair_extent *rex;
struct xfs_repair_extent *n;
for_each_xfs_repair_extent_safe(rex, n, exlist) {
list_del(&rex->list);
kmem_free(rex);
}
}
/* Compare two btree extents. */
static int
xfs_repair_btree_extent_cmp(
void *priv,
struct list_head *a,
struct list_head *b)
{
struct xfs_repair_extent *ap;
struct xfs_repair_extent *bp;
ap = container_of(a, struct xfs_repair_extent, list);
bp = container_of(b, struct xfs_repair_extent, list);
if (ap->fsbno > bp->fsbno)
return 1;
if (ap->fsbno < bp->fsbno)
return -1;
return 0;
}
/*
* Remove all the blocks mentioned in @sublist from the extents in @exlist.
*
* The intent is that callers will iterate the rmapbt for all of its records
* for a given owner to generate @exlist; and iterate all the blocks of the
* metadata structures that are not being rebuilt and have the same rmapbt
* owner to generate @sublist. This routine subtracts all the extents
* mentioned in sublist from all the extents linked in @exlist, which leaves
* @exlist as the list of blocks that are not accounted for, which we assume
* are the dead blocks of the old metadata structure. The blocks mentioned in
* @exlist can be reaped.
*/
#define LEFT_ALIGNED (1 << 0)
#define RIGHT_ALIGNED (1 << 1)
int
xfs_repair_subtract_extents(
struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *exlist,
struct xfs_repair_extent_list *sublist)
{
struct list_head *lp;
struct xfs_repair_extent *ex;
struct xfs_repair_extent *newex;
struct xfs_repair_extent *subex;
xfs_fsblock_t sub_fsb;
xfs_extlen_t sub_len;
int state;
int error = 0;
if (list_empty(&exlist->list) || list_empty(&sublist->list))
return 0;
ASSERT(!list_empty(&sublist->list));
list_sort(NULL, &exlist->list, xfs_repair_btree_extent_cmp);
list_sort(NULL, &sublist->list, xfs_repair_btree_extent_cmp);
/*
* Now that we've sorted both lists, we iterate exlist once, rolling
* forward through sublist and/or exlist as necessary until we find an
* overlap or reach the end of either list. We do not reset lp to the
* head of exlist nor do we reset subex to the head of sublist. The
* list traversal is similar to merge sort, but we're deleting
* instead. In this manner we avoid O(n^2) operations.
*/
subex = list_first_entry(&sublist->list, struct xfs_repair_extent,
list);
lp = exlist->list.next;
while (lp != &exlist->list) {
ex = list_entry(lp, struct xfs_repair_extent, list);
/*
* Advance subex and/or ex until we find a pair that
* intersect or we run out of extents.
*/
while (subex->fsbno + subex->len <= ex->fsbno) {
if (list_is_last(&subex->list, &sublist->list))
goto out;
subex = list_next_entry(subex, list);
}
if (subex->fsbno >= ex->fsbno + ex->len) {
lp = lp->next;
continue;
}
/* trim subex to fit the extent we have */
sub_fsb = subex->fsbno;
sub_len = subex->len;
if (subex->fsbno < ex->fsbno) {
sub_len -= ex->fsbno - subex->fsbno;
sub_fsb = ex->fsbno;
}
if (sub_len > ex->len)
sub_len = ex->len;
state = 0;
if (sub_fsb == ex->fsbno)
state |= LEFT_ALIGNED;
if (sub_fsb + sub_len == ex->fsbno + ex->len)
state |= RIGHT_ALIGNED;
switch (state) {
case LEFT_ALIGNED:
/* Coincides with only the left. */
ex->fsbno += sub_len;
ex->len -= sub_len;
break;
case RIGHT_ALIGNED:
/* Coincides with only the right. */
ex->len -= sub_len;
lp = lp->next;
break;
case LEFT_ALIGNED | RIGHT_ALIGNED:
/* Total overlap, just delete ex. */
lp = lp->next;
list_del(&ex->list);
kmem_free(ex);
break;
case 0:
/*
* Deleting from the middle: add the new right extent
* and then shrink the left extent.
*/
newex = kmem_alloc(sizeof(struct xfs_repair_extent),
KM_MAYFAIL);
if (!newex) {
error = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&newex->list);
newex->fsbno = sub_fsb + sub_len;
newex->len = ex->fsbno + ex->len - newex->fsbno;
list_add(&newex->list, &ex->list);
ex->len = sub_fsb - ex->fsbno;
lp = lp->next;
break;
default:
ASSERT(0);
break;
}
}
out:
return error;
}
#undef LEFT_ALIGNED
#undef RIGHT_ALIGNED
/*
* Disposal of Blocks from Old per-AG Btrees
*
* Now that we've constructed a new btree to replace the damaged one, we want
* to dispose of the blocks that (we think) the old btree was using.
-* Previously, we used the rmapbt to collect the extents (exlist) with the
+* Previously, we used the rmapbt to collect the extents (bitmap) with the
* rmap owner corresponding to the tree we rebuilt, collected extents for any
* blocks with the same rmap owner that are owned by another data structure
-* (sublist), and subtracted sublist from exlist. In theory the extents
-* remaining in exlist are the old btree's blocks.
+* (sublist), and subtracted sublist from bitmap. In theory the extents
+* remaining in bitmap are the old btree's blocks.
*
* Unfortunately, it's possible that the btree was crosslinked with other
* blocks on disk. The rmap data can tell us if there are multiple owners, so
@@ -598,7 +415,7 @@ out:
* If there are no rmap records at all, we also free the block. If the btree
* being rebuilt lives in the free space (bnobt/cntbt/rmapbt) then there isn't
* supposed to be a rmap record and everything is ok. For other btrees there
-* had to have been an rmap entry for the block to have ended up on @exlist,
+* had to have been an rmap entry for the block to have ended up on @bitmap,
* so if it's gone now there's something wrong and the fs will shut down.
*
* Note: If there are multiple rmap records with only the same rmap owner as
@@ -611,7 +428,7 @@ out:
* The caller is responsible for locking the AG headers for the entire rebuild
* operation so that nothing else can sneak in and change the AG state while
* we're not looking. We also assume that the caller already invalidated any
-* buffers associated with @exlist.
+* buffers associated with @bitmap.
*/
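Condensed, the disposal policy for a single block looks like this (a
restatement of xrep_reap_block further down, not a separate
implementation):

	if (has_other_rmap) {
		/* crosslinked: delete only our reverse mapping */
		error = xfs_rmap_free(sc->tp, agf_bp, agno, agbno, 1, oinfo);
	} else if (resv == XFS_AG_RESV_AGFL) {
		/* AGFL-reserved blocks go back on the freelist */
		error = xrep_put_freelist(sc, agbno);
	} else {
		/* otherwise free the block for general reuse */
		error = xfs_free_extent(sc->tp, fsbno, 1, oinfo, resv);
	}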
/*
@@ -619,15 +436,14 @@ out:
* is not intended for use with file data repairs; we have bunmapi for that.
*/
int
-xfs_repair_invalidate_blocks(
-struct xfs_scrub_context *sc,
-struct xfs_repair_extent_list *exlist)
+xrep_invalidate_blocks(
+struct xfs_scrub *sc,
+struct xfs_bitmap *bitmap)
{
-struct xfs_repair_extent *rex;
-struct xfs_repair_extent *n;
-struct xfs_buf *bp;
-xfs_fsblock_t fsbno;
-xfs_agblock_t i;
+struct xfs_bitmap_range *bmr;
+struct xfs_bitmap_range *n;
+struct xfs_buf *bp;
+xfs_fsblock_t fsbno;
/*
* For each block in each extent, see if there's an incore buffer for
@@ -637,18 +453,16 @@ xfs_repair_invalidate_blocks(
* because we never own those; and if we can't TRYLOCK the buffer we
* assume it's owned by someone else.
*/
-for_each_xfs_repair_extent_safe(rex, n, exlist) {
-for (fsbno = rex->fsbno, i = rex->len; i > 0; fsbno++, i--) {
-/* Skip AG headers and post-EOFS blocks */
-if (!xfs_verify_fsbno(sc->mp, fsbno))
-continue;
-bp = xfs_buf_incore(sc->mp->m_ddev_targp,
-XFS_FSB_TO_DADDR(sc->mp, fsbno),
-XFS_FSB_TO_BB(sc->mp, 1), XBF_TRYLOCK);
-if (bp) {
-xfs_trans_bjoin(sc->tp, bp);
-xfs_trans_binval(sc->tp, bp);
-}
+for_each_xfs_bitmap_block(fsbno, bmr, n, bitmap) {
+/* Skip AG headers and post-EOFS blocks */
+if (!xfs_verify_fsbno(sc->mp, fsbno))
+continue;
+bp = xfs_buf_incore(sc->mp->m_ddev_targp,
+XFS_FSB_TO_DADDR(sc->mp, fsbno),
+XFS_FSB_TO_BB(sc->mp, 1), XBF_TRYLOCK);
+if (bp) {
+xfs_trans_bjoin(sc->tp, bp);
+xfs_trans_binval(sc->tp, bp);
+}
}
@@ -657,11 +471,11 @@ xfs_repair_invalidate_blocks(
/* Ensure the freelist is the correct size. */
int
-xfs_repair_fix_freelist(
-struct xfs_scrub_context *sc,
-bool can_shrink)
+xrep_fix_freelist(
+struct xfs_scrub *sc,
+bool can_shrink)
{
struct xfs_alloc_arg args = {0};
args.mp = sc->mp;
args.tp = sc->tp;
@@ -677,15 +491,15 @@ xfs_repair_fix_freelist(
* Put a block back on the AGFL.
*/
STATIC int
-xfs_repair_put_freelist(
-struct xfs_scrub_context *sc,
-xfs_agblock_t agbno)
+xrep_put_freelist(
+struct xfs_scrub *sc,
+xfs_agblock_t agbno)
{
struct xfs_owner_info oinfo;
int error;
/* Make sure there's space on the freelist. */
-error = xfs_repair_fix_freelist(sc, true);
+error = xrep_fix_freelist(sc, true);
if (error)
return error;
@@ -711,20 +525,20 @@ xfs_repair_put_freelist(
return 0;
}
-/* Dispose of a single metadata block. */
+/* Dispose of a single block. */
STATIC int
-xfs_repair_dispose_btree_block(
-struct xfs_scrub_context *sc,
-xfs_fsblock_t fsbno,
-struct xfs_owner_info *oinfo,
-enum xfs_ag_resv_type resv)
+xrep_reap_block(
+struct xfs_scrub *sc,
+xfs_fsblock_t fsbno,
+struct xfs_owner_info *oinfo,
+enum xfs_ag_resv_type resv)
{
struct xfs_btree_cur *cur;
struct xfs_buf *agf_bp = NULL;
xfs_agnumber_t agno;
xfs_agblock_t agbno;
bool has_other_rmap;
int error;
agno = XFS_FSB_TO_AGNO(sc->mp, fsbno);
agbno = XFS_FSB_TO_AGBNO(sc->mp, fsbno);
@@ -747,9 +561,9 @@ xfs_repair_dispose_btree_block(
/* Can we find any other rmappings? */
error = xfs_rmap_has_other_keys(cur, agbno, 1, oinfo, &has_other_rmap);
+xfs_btree_del_cursor(cur, error);
if (error)
-goto out_cur;
-xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR);
+goto out_free;
/*
* If there are other rmappings, this block is cross linked and must
@@ -767,7 +581,7 @@ xfs_repair_dispose_btree_block(
if (has_other_rmap)
error = xfs_rmap_free(sc->tp, agf_bp, agno, agbno, 1, oinfo);
else if (resv == XFS_AG_RESV_AGFL)
error = xfs_repair_put_freelist(sc, agbno);
error = xrep_put_freelist(sc, agbno);
else
error = xfs_free_extent(sc->tp, fsbno, 1, oinfo, resv);
if (agf_bp != sc->sa.agf_bp)
@@ -777,50 +591,43 @@ xfs_repair_dispose_btree_block(
if (sc->ip)
return xfs_trans_roll_inode(&sc->tp, sc->ip);
-return xfs_repair_roll_ag_trans(sc);
+return xrep_roll_ag_trans(sc);
-out_cur:
-xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
+out_free:
if (agf_bp != sc->sa.agf_bp)
xfs_trans_brelse(sc->tp, agf_bp);
return error;
}
-/* Dispose of btree blocks from an old per-AG btree. */
+/* Dispose of every block of every extent in the bitmap. */
int
-xfs_repair_reap_btree_extents(
-struct xfs_scrub_context *sc,
-struct xfs_repair_extent_list *exlist,
-struct xfs_owner_info *oinfo,
-enum xfs_ag_resv_type type)
+xrep_reap_extents(
+struct xfs_scrub *sc,
+struct xfs_bitmap *bitmap,
+struct xfs_owner_info *oinfo,
+enum xfs_ag_resv_type type)
{
-struct xfs_repair_extent *rex;
-struct xfs_repair_extent *n;
-int error = 0;
+struct xfs_bitmap_range *bmr;
+struct xfs_bitmap_range *n;
+xfs_fsblock_t fsbno;
+int error = 0;
ASSERT(xfs_sb_version_hasrmapbt(&sc->mp->m_sb));
/* Dispose of every block from the old btree. */
-for_each_xfs_repair_extent_safe(rex, n, exlist) {
+for_each_xfs_bitmap_block(fsbno, bmr, n, bitmap) {
ASSERT(sc->ip != NULL ||
-XFS_FSB_TO_AGNO(sc->mp, rex->fsbno) == sc->sa.agno);
+XFS_FSB_TO_AGNO(sc->mp, fsbno) == sc->sa.agno);
-trace_xfs_repair_dispose_btree_extent(sc->mp,
-XFS_FSB_TO_AGNO(sc->mp, rex->fsbno),
-XFS_FSB_TO_AGBNO(sc->mp, rex->fsbno), rex->len);
+trace_xrep_dispose_btree_extent(sc->mp,
+XFS_FSB_TO_AGNO(sc->mp, fsbno),
+XFS_FSB_TO_AGBNO(sc->mp, fsbno), 1);
-for (; rex->len > 0; rex->len--, rex->fsbno++) {
-error = xfs_repair_dispose_btree_block(sc, rex->fsbno,
-oinfo, type);
-if (error)
-goto out;
-}
-list_del(&rex->list);
-kmem_free(rex);
+error = xrep_reap_block(sc, fsbno, oinfo, type);
+if (error)
+goto out;
}
out:
-xfs_repair_cancel_btree_extents(sc, exlist);
+xfs_bitmap_destroy(bitmap);
return error;
}
@@ -832,12 +639,12 @@ out:
* btree roots. This is not guaranteed to work if the AG is heavily damaged
* or the rmap data are corrupt.
*
-* Callers of xfs_repair_find_ag_btree_roots must lock the AGF and AGFL
+* Callers of xrep_find_ag_btree_roots must lock the AGF and AGFL
* buffers if the AGF is being rebuilt; or the AGF and AGI buffers if the
* AGI is being rebuilt. It must maintain these locks until it's safe for
* other threads to change the btrees' shapes. The caller provides
* information about the btrees to look for by passing in an array of
-* xfs_repair_find_ag_btree with the (rmap owner, buf_ops, magic) fields set.
+* xrep_find_ag_btree with the (rmap owner, buf_ops, magic) fields set.
* The (root, height) fields will be set on return if anything is found. The
* last element of the array should have a NULL buf_ops to mark the end of the
* array.
@@ -851,30 +658,30 @@ out:
* should be the roots.
*/
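A hypothetical caller sketch: locating the inobt root while rebuilding
an AGI. Field names follow struct xrep_find_ag_btree as declared in
scrub/repair.h at this point in the series; the magic is the v5 inobt
magic, and the rebuild helper at the end is illustrative:

	struct xrep_find_ag_btree fab[] = {
		{
			.rmap_owner	= XFS_RMAP_OWN_INOBT,
			.buf_ops	= &xfs_inobt_buf_ops,
			.magic		= XFS_IBT_CRC_MAGIC,
		},
		{ .buf_ops = NULL },	/* terminates the array */
	};
	int error;

	error = xrep_find_ag_btree_roots(sc, agf_bp, fab, agfl_bp);
	if (!error && fab[0].root != NULLAGBLOCK)
		rebuild_inobt_from(fab[0].root, fab[0].height); /* illustrative */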
-struct xfs_repair_findroot {
-struct xfs_scrub_context *sc;
+struct xrep_findroot {
+struct xfs_scrub *sc;
struct xfs_buf *agfl_bp;
struct xfs_agf *agf;
-struct xfs_repair_find_ag_btree *btree_info;
+struct xrep_find_ag_btree *btree_info;
};
/* See if our block is in the AGFL. */
STATIC int
-xfs_repair_findroot_agfl_walk(
-struct xfs_mount *mp,
-xfs_agblock_t bno,
-void *priv)
+xrep_findroot_agfl_walk(
+struct xfs_mount *mp,
+xfs_agblock_t bno,
+void *priv)
{
xfs_agblock_t *agbno = priv;
return (*agbno == bno) ? XFS_BTREE_QUERY_RANGE_ABORT : 0;
}
/* Does this block match the btree information passed in? */
STATIC int
-xfs_repair_findroot_block(
-struct xfs_repair_findroot *ri,
-struct xfs_repair_find_ag_btree *fab,
+xrep_findroot_block(
+struct xrep_findroot *ri,
+struct xrep_find_ag_btree *fab,
uint64_t owner,
xfs_agblock_t agbno,
bool *found_it)
@@ -895,7 +702,7 @@ xfs_repair_findroot_block(
*/
if (owner == XFS_RMAP_OWN_AG) {
error = xfs_agfl_walk(mp, ri->agf, ri->agfl_bp,
-xfs_repair_findroot_agfl_walk, &agbno);
+xrep_findroot_agfl_walk, &agbno);
if (error == XFS_BTREE_QUERY_RANGE_ABORT)
return 0;
if (error)
@@ -933,7 +740,7 @@ xfs_repair_findroot_block(
fab->height = xfs_btree_get_level(btblock) + 1;
*found_it = true;
-trace_xfs_repair_findroot_block(mp, ri->sc->sa.agno, agbno,
+trace_xrep_findroot_block(mp, ri->sc->sa.agno, agbno,
be32_to_cpu(btblock->bb_magic), fab->height - 1);
out:
xfs_trans_brelse(ri->sc->tp, bp);
@@ -945,13 +752,13 @@ out:
* looking for?
*/
STATIC int
-xfs_repair_findroot_rmap(
+xrep_findroot_rmap(
struct xfs_btree_cur *cur,
struct xfs_rmap_irec *rec,
void *priv)
{
-struct xfs_repair_findroot *ri = priv;
-struct xfs_repair_find_ag_btree *fab;
+struct xrep_findroot *ri = priv;
+struct xrep_find_ag_btree *fab;
xfs_agblock_t b;
bool found_it;
int error = 0;
@@ -966,7 +773,7 @@ xfs_repair_findroot_rmap(
for (fab = ri->btree_info; fab->buf_ops; fab++) {
if (rec->rm_owner != fab->rmap_owner)
continue;
-error = xfs_repair_findroot_block(ri, fab,
+error = xrep_findroot_block(ri, fab,
rec->rm_owner, rec->rm_startblock + b,
&found_it);
if (error)
@@ -981,15 +788,15 @@ xfs_repair_findroot_rmap(
/* Find the roots of the per-AG btrees described in btree_info. */
int
-xfs_repair_find_ag_btree_roots(
-struct xfs_scrub_context *sc,
+xrep_find_ag_btree_roots(
+struct xfs_scrub *sc,
struct xfs_buf *agf_bp,
-struct xfs_repair_find_ag_btree *btree_info,
+struct xrep_find_ag_btree *btree_info,
struct xfs_buf *agfl_bp)
{
struct xfs_mount *mp = sc->mp;
-struct xfs_repair_findroot ri;
-struct xfs_repair_find_ag_btree *fab;
+struct xrep_findroot ri;
+struct xrep_find_ag_btree *fab;
struct xfs_btree_cur *cur;
int error;
@@ -1008,19 +815,19 @@ xfs_repair_find_ag_btree_roots(
}
cur = xfs_rmapbt_init_cursor(mp, sc->tp, agf_bp, sc->sa.agno);
-error = xfs_rmap_query_all(cur, xfs_repair_findroot_rmap, &ri);
-xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
+error = xfs_rmap_query_all(cur, xrep_findroot_rmap, &ri);
+xfs_btree_del_cursor(cur, error);
return error;
}
/* Force a quotacheck the next time we mount. */
void
xfs_repair_force_quotacheck(
struct xfs_scrub_context *sc,
uint dqtype)
xrep_force_quotacheck(
struct xfs_scrub *sc,
uint dqtype)
{
uint flag;
uint flag;
flag = xfs_quota_chkd_flag(dqtype);
if (!(flag & sc->mp->m_qflags))
@ -1044,10 +851,10 @@ xfs_repair_force_quotacheck(
* repair corruptions in the quota metadata.
*/
int
xfs_repair_ino_dqattach(
struct xfs_scrub_context *sc)
xrep_ino_dqattach(
struct xfs_scrub *sc)
{
int error;
int error;
error = xfs_qm_dqattach_locked(sc->ip, false);
switch (error) {
@ -1058,11 +865,11 @@ xfs_repair_ino_dqattach(
"inode %llu repair encountered quota error %d, quotacheck forced.",
(unsigned long long)sc->ip->i_ino, error);
if (XFS_IS_UQUOTA_ON(sc->mp) && !sc->ip->i_udquot)
xfs_repair_force_quotacheck(sc, XFS_DQ_USER);
xrep_force_quotacheck(sc, XFS_DQ_USER);
if (XFS_IS_GQUOTA_ON(sc->mp) && !sc->ip->i_gdquot)
xfs_repair_force_quotacheck(sc, XFS_DQ_GROUP);
xrep_force_quotacheck(sc, XFS_DQ_GROUP);
if (XFS_IS_PQUOTA_ON(sc->mp) && !sc->ip->i_pdquot)
xfs_repair_force_quotacheck(sc, XFS_DQ_PROJ);
xrep_force_quotacheck(sc, XFS_DQ_PROJ);
/* fall through */
case -ESRCH:
error = 0;


@ -6,7 +6,7 @@
#ifndef __XFS_SCRUB_REPAIR_H__
#define __XFS_SCRUB_REPAIR_H__
static inline int xfs_repair_notsupported(struct xfs_scrub_context *sc)
static inline int xrep_notsupported(struct xfs_scrub *sc)
{
return -EOPNOTSUPP;
}
@ -15,55 +15,26 @@ static inline int xfs_repair_notsupported(struct xfs_scrub_context *sc)
/* Repair helpers */
int xfs_repair_attempt(struct xfs_inode *ip, struct xfs_scrub_context *sc,
bool *fixed);
void xfs_repair_failure(struct xfs_mount *mp);
int xfs_repair_roll_ag_trans(struct xfs_scrub_context *sc);
bool xfs_repair_ag_has_space(struct xfs_perag *pag, xfs_extlen_t nr_blocks,
int xrep_attempt(struct xfs_inode *ip, struct xfs_scrub *sc, bool *fixed);
void xrep_failure(struct xfs_mount *mp);
int xrep_roll_ag_trans(struct xfs_scrub *sc);
bool xrep_ag_has_space(struct xfs_perag *pag, xfs_extlen_t nr_blocks,
enum xfs_ag_resv_type type);
xfs_extlen_t xfs_repair_calc_ag_resblks(struct xfs_scrub_context *sc);
int xfs_repair_alloc_ag_block(struct xfs_scrub_context *sc,
struct xfs_owner_info *oinfo, xfs_fsblock_t *fsbno,
enum xfs_ag_resv_type resv);
int xfs_repair_init_btblock(struct xfs_scrub_context *sc, xfs_fsblock_t fsb,
xfs_extlen_t xrep_calc_ag_resblks(struct xfs_scrub *sc);
int xrep_alloc_ag_block(struct xfs_scrub *sc, struct xfs_owner_info *oinfo,
xfs_fsblock_t *fsbno, enum xfs_ag_resv_type resv);
int xrep_init_btblock(struct xfs_scrub *sc, xfs_fsblock_t fsb,
struct xfs_buf **bpp, xfs_btnum_t btnum,
const struct xfs_buf_ops *ops);
struct xfs_repair_extent {
struct list_head list;
xfs_fsblock_t fsbno;
xfs_extlen_t len;
};
struct xfs_bitmap;
struct xfs_repair_extent_list {
struct list_head list;
};
static inline void
xfs_repair_init_extent_list(
struct xfs_repair_extent_list *exlist)
{
INIT_LIST_HEAD(&exlist->list);
}
#define for_each_xfs_repair_extent_safe(rbe, n, exlist) \
list_for_each_entry_safe((rbe), (n), &(exlist)->list, list)
int xfs_repair_collect_btree_extent(struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *btlist, xfs_fsblock_t fsbno,
xfs_extlen_t len);
void xfs_repair_cancel_btree_extents(struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *btlist);
int xfs_repair_subtract_extents(struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *exlist,
struct xfs_repair_extent_list *sublist);
int xfs_repair_fix_freelist(struct xfs_scrub_context *sc, bool can_shrink);
int xfs_repair_invalidate_blocks(struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *btlist);
int xfs_repair_reap_btree_extents(struct xfs_scrub_context *sc,
struct xfs_repair_extent_list *exlist,
int xrep_fix_freelist(struct xfs_scrub *sc, bool can_shrink);
int xrep_invalidate_blocks(struct xfs_scrub *sc, struct xfs_bitmap *btlist);
int xrep_reap_extents(struct xfs_scrub *sc, struct xfs_bitmap *exlist,
struct xfs_owner_info *oinfo, enum xfs_ag_resv_type type);
struct xfs_repair_find_ag_btree {
struct xrep_find_ag_btree {
/* in: rmap owner of the btree we're looking for */
uint64_t rmap_owner;
@ -78,40 +49,44 @@ struct xfs_repair_find_ag_btree {
unsigned int height;
};
int xfs_repair_find_ag_btree_roots(struct xfs_scrub_context *sc,
struct xfs_buf *agf_bp,
struct xfs_repair_find_ag_btree *btree_info,
struct xfs_buf *agfl_bp);
void xfs_repair_force_quotacheck(struct xfs_scrub_context *sc, uint dqtype);
int xfs_repair_ino_dqattach(struct xfs_scrub_context *sc);
int xrep_find_ag_btree_roots(struct xfs_scrub *sc, struct xfs_buf *agf_bp,
struct xrep_find_ag_btree *btree_info, struct xfs_buf *agfl_bp);
void xrep_force_quotacheck(struct xfs_scrub *sc, uint dqtype);
int xrep_ino_dqattach(struct xfs_scrub *sc);
/* Metadata repairers */
int xfs_repair_probe(struct xfs_scrub_context *sc);
int xfs_repair_superblock(struct xfs_scrub_context *sc);
int xrep_probe(struct xfs_scrub *sc);
int xrep_superblock(struct xfs_scrub *sc);
int xrep_agf(struct xfs_scrub *sc);
int xrep_agfl(struct xfs_scrub *sc);
int xrep_agi(struct xfs_scrub *sc);
#else
static inline int xfs_repair_attempt(
struct xfs_inode *ip,
struct xfs_scrub_context *sc,
bool *fixed)
static inline int xrep_attempt(
struct xfs_inode *ip,
struct xfs_scrub *sc,
bool *fixed)
{
return -EOPNOTSUPP;
}
static inline void xfs_repair_failure(struct xfs_mount *mp) {}
static inline void xrep_failure(struct xfs_mount *mp) {}
static inline xfs_extlen_t
xfs_repair_calc_ag_resblks(
struct xfs_scrub_context *sc)
xrep_calc_ag_resblks(
struct xfs_scrub *sc)
{
ASSERT(!(sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR));
return 0;
}
#define xfs_repair_probe xfs_repair_notsupported
#define xfs_repair_superblock xfs_repair_notsupported
#define xrep_probe xrep_notsupported
#define xrep_superblock xrep_notsupported
#define xrep_agf xrep_notsupported
#define xrep_agfl xrep_notsupported
#define xrep_agi xrep_notsupported
#endif /* CONFIG_XFS_ONLINE_REPAIR */


@ -29,30 +29,30 @@
* Set us up to scrub reverse mapping btrees.
*/
int
xfs_scrub_setup_ag_rmapbt(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_ag_rmapbt(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
return xfs_scrub_setup_ag_btree(sc, ip, false);
return xchk_setup_ag_btree(sc, ip, false);
}
/* Reverse-mapping scrubber. */
/* Cross-reference a rmap against the refcount btree. */
STATIC void
xfs_scrub_rmapbt_xref_refc(
struct xfs_scrub_context *sc,
struct xfs_rmap_irec *irec)
xchk_rmapbt_xref_refc(
struct xfs_scrub *sc,
struct xfs_rmap_irec *irec)
{
xfs_agblock_t fbno;
xfs_extlen_t flen;
bool non_inode;
bool is_bmbt;
bool is_attr;
bool is_unwritten;
int error;
xfs_agblock_t fbno;
xfs_extlen_t flen;
bool non_inode;
bool is_bmbt;
bool is_attr;
bool is_unwritten;
int error;
if (!sc->sa.refc_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.refc_cur || xchk_skip_xref(sc->sm))
return;
non_inode = XFS_RMAP_NON_INODE_OWNER(irec->rm_owner);
@ -63,58 +63,58 @@ xfs_scrub_rmapbt_xref_refc(
/* If this is shared, must be a data fork extent. */
error = xfs_refcount_find_shared(sc->sa.refc_cur, irec->rm_startblock,
irec->rm_blockcount, &fbno, &flen, false);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.refc_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.refc_cur))
return;
if (flen != 0 && (non_inode || is_attr || is_bmbt || is_unwritten))
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.refc_cur, 0);
}
/* Cross-reference with the other btrees. */
STATIC void
xfs_scrub_rmapbt_xref(
struct xfs_scrub_context *sc,
struct xfs_rmap_irec *irec)
xchk_rmapbt_xref(
struct xfs_scrub *sc,
struct xfs_rmap_irec *irec)
{
xfs_agblock_t agbno = irec->rm_startblock;
xfs_extlen_t len = irec->rm_blockcount;
xfs_agblock_t agbno = irec->rm_startblock;
xfs_extlen_t len = irec->rm_blockcount;
if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
return;
xfs_scrub_xref_is_used_space(sc, agbno, len);
xchk_xref_is_used_space(sc, agbno, len);
if (irec->rm_owner == XFS_RMAP_OWN_INODES)
xfs_scrub_xref_is_inode_chunk(sc, agbno, len);
xchk_xref_is_inode_chunk(sc, agbno, len);
else
xfs_scrub_xref_is_not_inode_chunk(sc, agbno, len);
xchk_xref_is_not_inode_chunk(sc, agbno, len);
if (irec->rm_owner == XFS_RMAP_OWN_COW)
xfs_scrub_xref_is_cow_staging(sc, irec->rm_startblock,
xchk_xref_is_cow_staging(sc, irec->rm_startblock,
irec->rm_blockcount);
else
xfs_scrub_rmapbt_xref_refc(sc, irec);
xchk_rmapbt_xref_refc(sc, irec);
}
/* Scrub an rmapbt record. */
STATIC int
xfs_scrub_rmapbt_rec(
struct xfs_scrub_btree *bs,
union xfs_btree_rec *rec)
xchk_rmapbt_rec(
struct xchk_btree *bs,
union xfs_btree_rec *rec)
{
struct xfs_mount *mp = bs->cur->bc_mp;
struct xfs_rmap_irec irec;
xfs_agnumber_t agno = bs->cur->bc_private.a.agno;
bool non_inode;
bool is_unwritten;
bool is_bmbt;
bool is_attr;
int error;
struct xfs_mount *mp = bs->cur->bc_mp;
struct xfs_rmap_irec irec;
xfs_agnumber_t agno = bs->cur->bc_private.a.agno;
bool non_inode;
bool is_unwritten;
bool is_bmbt;
bool is_attr;
int error;
error = xfs_rmap_btrec_to_irec(rec, &irec);
if (!xfs_scrub_btree_process_error(bs->sc, bs->cur, 0, &error))
if (!xchk_btree_process_error(bs->sc, bs->cur, 0, &error))
goto out;
/* Check extent. */
if (irec.rm_startblock + irec.rm_blockcount <= irec.rm_startblock)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (irec.rm_owner == XFS_RMAP_OWN_FS) {
/*
@ -124,7 +124,7 @@ xfs_scrub_rmapbt_rec(
*/
if (irec.rm_startblock != 0 ||
irec.rm_blockcount != XFS_AGFL_BLOCK(mp) + 1)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
} else {
/*
* Otherwise we must point somewhere past the static metadata
@ -133,7 +133,7 @@ xfs_scrub_rmapbt_rec(
if (!xfs_verify_agbno(mp, agno, irec.rm_startblock) ||
!xfs_verify_agbno(mp, agno, irec.rm_startblock +
irec.rm_blockcount - 1))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
}
/* Check flags. */
@ -143,105 +143,105 @@ xfs_scrub_rmapbt_rec(
is_unwritten = irec.rm_flags & XFS_RMAP_UNWRITTEN;
if (is_bmbt && irec.rm_offset != 0)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (non_inode && irec.rm_offset != 0)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (is_unwritten && (is_bmbt || non_inode || is_attr))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (non_inode && (is_bmbt || is_unwritten || is_attr))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
if (!non_inode) {
if (!xfs_verify_ino(mp, irec.rm_owner))
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
} else {
/* Non-inode owner within the magic values? */
if (irec.rm_owner <= XFS_RMAP_OWN_MIN ||
irec.rm_owner > XFS_RMAP_OWN_FS)
xfs_scrub_btree_set_corrupt(bs->sc, bs->cur, 0);
xchk_btree_set_corrupt(bs->sc, bs->cur, 0);
}
xfs_scrub_rmapbt_xref(bs->sc, &irec);
xchk_rmapbt_xref(bs->sc, &irec);
out:
return error;
}
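[Editor's note: the four flag checks above collapse into one predicate. A
condensed editorial restatement (not code from this series; the helper name is
hypothetical):]

	static bool rmap_flags_valid(bool non_inode, bool is_bmbt, bool is_attr,
				     bool is_unwritten, uint64_t rm_offset)
	{
		/* bmbt and non-inode rmaps must not carry a file offset */
		if ((is_bmbt || non_inode) && rm_offset != 0)
			return false;
		/* unwritten only makes sense for plain inode data extents */
		if (is_unwritten && (is_bmbt || non_inode || is_attr))
			return false;
		/* non-inode owners carry no fork or state flags at all */
		if (non_inode && (is_bmbt || is_unwritten || is_attr))
			return false;
		return true;
	}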
/* Scrub the rmap btree for some AG. */
int
xfs_scrub_rmapbt(
struct xfs_scrub_context *sc)
xchk_rmapbt(
struct xfs_scrub *sc)
{
struct xfs_owner_info oinfo;
struct xfs_owner_info oinfo;
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_AG);
return xfs_scrub_btree(sc, sc->sa.rmap_cur, xfs_scrub_rmapbt_rec,
return xchk_btree(sc, sc->sa.rmap_cur, xchk_rmapbt_rec,
&oinfo, NULL);
}
/* xref check that the extent is owned by a given owner */
static inline void
xfs_scrub_xref_check_owner(
struct xfs_scrub_context *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo,
bool should_have_rmap)
xchk_xref_check_owner(
struct xfs_scrub *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo,
bool should_have_rmap)
{
bool has_rmap;
int error;
bool has_rmap;
int error;
if (!sc->sa.rmap_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm))
return;
error = xfs_rmap_record_exists(sc->sa.rmap_cur, bno, len, oinfo,
&has_rmap);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (has_rmap != should_have_rmap)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
}
/* xref check that the extent is owned by a given owner */
void
xfs_scrub_xref_is_owned_by(
struct xfs_scrub_context *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo)
xchk_xref_is_owned_by(
struct xfs_scrub *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo)
{
xfs_scrub_xref_check_owner(sc, bno, len, oinfo, true);
xchk_xref_check_owner(sc, bno, len, oinfo, true);
}
/* xref check that the extent is not owned by a given owner */
void
xfs_scrub_xref_is_not_owned_by(
struct xfs_scrub_context *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo)
xchk_xref_is_not_owned_by(
struct xfs_scrub *sc,
xfs_agblock_t bno,
xfs_extlen_t len,
struct xfs_owner_info *oinfo)
{
xfs_scrub_xref_check_owner(sc, bno, len, oinfo, false);
xchk_xref_check_owner(sc, bno, len, oinfo, false);
}
/* xref check that the extent has no reverse mapping at all */
void
xfs_scrub_xref_has_no_owner(
struct xfs_scrub_context *sc,
xfs_agblock_t bno,
xfs_extlen_t len)
xchk_xref_has_no_owner(
struct xfs_scrub *sc,
xfs_agblock_t bno,
xfs_extlen_t len)
{
bool has_rmap;
int error;
bool has_rmap;
int error;
if (!sc->sa.rmap_cur || xfs_scrub_skip_xref(sc->sm))
if (!sc->sa.rmap_cur || xchk_skip_xref(sc->sm))
return;
error = xfs_rmap_has_record(sc->sa.rmap_cur, bno, len, &has_rmap);
if (!xfs_scrub_should_check_xref(sc, &error, &sc->sa.rmap_cur))
if (!xchk_should_check_xref(sc, &error, &sc->sa.rmap_cur))
return;
if (has_rmap)
xfs_scrub_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
xchk_btree_xref_set_corrupt(sc, sc->sa.rmap_cur, 0);
}


@ -25,13 +25,13 @@
/* Set us up with the realtime metadata locked. */
int
xfs_scrub_setup_rt(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_rt(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
int error;
int error;
error = xfs_scrub_setup_fs(sc, ip);
error = xchk_setup_fs(sc, ip);
if (error)
return error;
@ -46,14 +46,14 @@ xfs_scrub_setup_rt(
/* Scrub a free extent record from the realtime bitmap. */
STATIC int
xfs_scrub_rtbitmap_rec(
struct xfs_trans *tp,
struct xfs_rtalloc_rec *rec,
void *priv)
xchk_rtbitmap_rec(
struct xfs_trans *tp,
struct xfs_rtalloc_rec *rec,
void *priv)
{
struct xfs_scrub_context *sc = priv;
xfs_rtblock_t startblock;
xfs_rtblock_t blockcount;
struct xfs_scrub *sc = priv;
xfs_rtblock_t startblock;
xfs_rtblock_t blockcount;
startblock = rec->ar_startext * tp->t_mountp->m_sb.sb_rextsize;
blockcount = rec->ar_extcount * tp->t_mountp->m_sb.sb_rextsize;
@ -61,24 +61,24 @@ xfs_scrub_rtbitmap_rec(
if (startblock + blockcount <= startblock ||
!xfs_verify_rtbno(sc->mp, startblock) ||
!xfs_verify_rtbno(sc->mp, startblock + blockcount - 1))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
return 0;
}
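[Editor's note: a worked example of the unit conversion above: with
sb_rextsize = 4 blocks, a record { ar_startext = 10, ar_extcount = 3 } yields
startblock = 40 and blockcount = 12, i.e. realtime blocks 40..51. The check
then verifies both endpoints and the "startblock + blockcount <= startblock"
guard catches arithmetic overflow.]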
/* Scrub the realtime bitmap. */
int
xfs_scrub_rtbitmap(
struct xfs_scrub_context *sc)
xchk_rtbitmap(
struct xfs_scrub *sc)
{
int error;
int error;
/* Invoke the fork scrubber. */
error = xfs_scrub_metadata_inode_forks(sc);
error = xchk_metadata_inode_forks(sc);
if (error || (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
return error;
error = xfs_rtalloc_query_all(sc->tp, xfs_scrub_rtbitmap_rec, sc);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
error = xfs_rtalloc_query_all(sc->tp, xchk_rtbitmap_rec, sc);
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out;
out:
@ -87,13 +87,13 @@ out:
/* Scrub the realtime summary. */
int
xfs_scrub_rtsummary(
struct xfs_scrub_context *sc)
xchk_rtsummary(
struct xfs_scrub *sc)
{
struct xfs_inode *rsumip = sc->mp->m_rsumip;
struct xfs_inode *old_ip = sc->ip;
uint old_ilock_flags = sc->ilock_flags;
int error = 0;
struct xfs_inode *rsumip = sc->mp->m_rsumip;
struct xfs_inode *old_ip = sc->ip;
uint old_ilock_flags = sc->ilock_flags;
int error = 0;
/*
* We ILOCK'd the rt bitmap ip in the setup routine, now lock the
@ -107,12 +107,12 @@ xfs_scrub_rtsummary(
xfs_ilock(sc->ip, sc->ilock_flags);
/* Invoke the fork scrubber. */
error = xfs_scrub_metadata_inode_forks(sc);
error = xchk_metadata_inode_forks(sc);
if (error || (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
goto out;
/* XXX: implement this some day */
xfs_scrub_set_incomplete(sc);
xchk_set_incomplete(sc);
out:
/* Switch back to the rtbitmap inode and lock flags. */
xfs_iunlock(sc->ip, sc->ilock_flags);
@ -124,18 +124,18 @@ out:
/* xref check that the extent is not free in the rtbitmap */
void
xfs_scrub_xref_is_used_rt_space(
struct xfs_scrub_context *sc,
xfs_rtblock_t fsbno,
xfs_extlen_t len)
xchk_xref_is_used_rt_space(
struct xfs_scrub *sc,
xfs_rtblock_t fsbno,
xfs_extlen_t len)
{
xfs_rtblock_t startext;
xfs_rtblock_t endext;
xfs_rtblock_t extcount;
bool is_free;
int error;
xfs_rtblock_t startext;
xfs_rtblock_t endext;
xfs_rtblock_t extcount;
bool is_free;
int error;
if (xfs_scrub_skip_xref(sc->sm))
if (xchk_skip_xref(sc->sm))
return;
startext = fsbno;
@ -147,10 +147,10 @@ xfs_scrub_xref_is_used_rt_space(
xfs_ilock(sc->mp->m_rbmip, XFS_ILOCK_SHARED | XFS_ILOCK_RTBITMAP);
error = xfs_rtalloc_extent_is_free(sc->mp, sc->tp, startext, extcount,
&is_free);
if (!xfs_scrub_should_check_xref(sc, &error, NULL))
if (!xchk_should_check_xref(sc, &error, NULL))
goto out_unlock;
if (is_free)
xfs_scrub_ino_xref_set_corrupt(sc, sc->mp->m_rbmip->i_ino);
xchk_ino_xref_set_corrupt(sc, sc->mp->m_rbmip->i_ino);
out_unlock:
xfs_iunlock(sc->mp->m_rbmip, XFS_ILOCK_SHARED | XFS_ILOCK_RTBITMAP);
}


@ -131,6 +131,12 @@
* optimize the structure so that the rebuild knows what to do. The
* second check evaluates the completeness of the repair; that is what
* is reported to userspace.
*
* A quick note on symbol prefixes:
* - "xfs_" are general XFS symbols.
* - "xchk_" are symbols related to metadata checking.
* - "xrep_" are symbols related to metadata repair.
* - "xfs_scrub_" are symbols that tie online fsck to the rest of XFS.
*/
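[Editor's note: representative declarations from this series, one per prefix,
to make the convention concrete (the xfs_scrub_metadata prototype is the ioctl
entry point that appears later in this diff):]

	int xchk_agf(struct xfs_scrub *sc);		/* checker */
	int xrep_agf(struct xfs_scrub *sc);		/* repairer */
	int xfs_scrub_metadata(struct xfs_inode *ip,
			       struct xfs_scrub_metadata *sm);	/* fsck glue */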
/*
@ -144,12 +150,12 @@
* supported by the running kernel.
*/
static int
xfs_scrub_probe(
struct xfs_scrub_context *sc)
xchk_probe(
struct xfs_scrub *sc)
{
int error = 0;
int error = 0;
if (xfs_scrub_should_terminate(sc, &error))
if (xchk_should_terminate(sc, &error))
return error;
return 0;
@ -159,12 +165,12 @@ xfs_scrub_probe(
/* Free all the resources and finish the transactions. */
STATIC int
xfs_scrub_teardown(
struct xfs_scrub_context *sc,
struct xfs_inode *ip_in,
int error)
xchk_teardown(
struct xfs_scrub *sc,
struct xfs_inode *ip_in,
int error)
{
xfs_scrub_ag_free(sc, &sc->sa);
xchk_ag_free(sc, &sc->sa);
if (sc->tp) {
if (error == 0 && (sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR))
error = xfs_trans_commit(sc->tp);
@ -177,7 +183,7 @@ xfs_scrub_teardown(
xfs_iunlock(sc->ip, sc->ilock_flags);
if (sc->ip != ip_in &&
!xfs_internal_inum(sc->mp, sc->ip->i_ino))
iput(VFS_I(sc->ip));
xfs_irele(sc->ip);
sc->ip = NULL;
}
if (sc->has_quotaofflock)
@ -191,165 +197,165 @@ xfs_scrub_teardown(
/* Scrubbing dispatch. */
static const struct xfs_scrub_meta_ops meta_scrub_ops[] = {
static const struct xchk_meta_ops meta_scrub_ops[] = {
[XFS_SCRUB_TYPE_PROBE] = { /* ioctl presence test */
.type = ST_NONE,
.setup = xfs_scrub_setup_fs,
.scrub = xfs_scrub_probe,
.repair = xfs_repair_probe,
.setup = xchk_setup_fs,
.scrub = xchk_probe,
.repair = xrep_probe,
},
[XFS_SCRUB_TYPE_SB] = { /* superblock */
.type = ST_PERAG,
.setup = xfs_scrub_setup_fs,
.scrub = xfs_scrub_superblock,
.repair = xfs_repair_superblock,
.setup = xchk_setup_fs,
.scrub = xchk_superblock,
.repair = xrep_superblock,
},
[XFS_SCRUB_TYPE_AGF] = { /* agf */
.type = ST_PERAG,
.setup = xfs_scrub_setup_fs,
.scrub = xfs_scrub_agf,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_fs,
.scrub = xchk_agf,
.repair = xrep_agf,
},
[XFS_SCRUB_TYPE_AGFL]= { /* agfl */
.type = ST_PERAG,
.setup = xfs_scrub_setup_fs,
.scrub = xfs_scrub_agfl,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_fs,
.scrub = xchk_agfl,
.repair = xrep_agfl,
},
[XFS_SCRUB_TYPE_AGI] = { /* agi */
.type = ST_PERAG,
.setup = xfs_scrub_setup_fs,
.scrub = xfs_scrub_agi,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_fs,
.scrub = xchk_agi,
.repair = xrep_agi,
},
[XFS_SCRUB_TYPE_BNOBT] = { /* bnobt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_allocbt,
.scrub = xfs_scrub_bnobt,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_ag_allocbt,
.scrub = xchk_bnobt,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_CNTBT] = { /* cntbt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_allocbt,
.scrub = xfs_scrub_cntbt,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_ag_allocbt,
.scrub = xchk_cntbt,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_INOBT] = { /* inobt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_iallocbt,
.scrub = xfs_scrub_inobt,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_ag_iallocbt,
.scrub = xchk_inobt,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_FINOBT] = { /* finobt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_iallocbt,
.scrub = xfs_scrub_finobt,
.setup = xchk_setup_ag_iallocbt,
.scrub = xchk_finobt,
.has = xfs_sb_version_hasfinobt,
.repair = xfs_repair_notsupported,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_RMAPBT] = { /* rmapbt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_rmapbt,
.scrub = xfs_scrub_rmapbt,
.setup = xchk_setup_ag_rmapbt,
.scrub = xchk_rmapbt,
.has = xfs_sb_version_hasrmapbt,
.repair = xfs_repair_notsupported,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_REFCNTBT] = { /* refcountbt */
.type = ST_PERAG,
.setup = xfs_scrub_setup_ag_refcountbt,
.scrub = xfs_scrub_refcountbt,
.setup = xchk_setup_ag_refcountbt,
.scrub = xchk_refcountbt,
.has = xfs_sb_version_hasreflink,
.repair = xfs_repair_notsupported,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_INODE] = { /* inode record */
.type = ST_INODE,
.setup = xfs_scrub_setup_inode,
.scrub = xfs_scrub_inode,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_inode,
.scrub = xchk_inode,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_BMBTD] = { /* inode data fork */
.type = ST_INODE,
.setup = xfs_scrub_setup_inode_bmap,
.scrub = xfs_scrub_bmap_data,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_inode_bmap,
.scrub = xchk_bmap_data,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_BMBTA] = { /* inode attr fork */
.type = ST_INODE,
.setup = xfs_scrub_setup_inode_bmap,
.scrub = xfs_scrub_bmap_attr,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_inode_bmap,
.scrub = xchk_bmap_attr,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_BMBTC] = { /* inode CoW fork */
.type = ST_INODE,
.setup = xfs_scrub_setup_inode_bmap,
.scrub = xfs_scrub_bmap_cow,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_inode_bmap,
.scrub = xchk_bmap_cow,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_DIR] = { /* directory */
.type = ST_INODE,
.setup = xfs_scrub_setup_directory,
.scrub = xfs_scrub_directory,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_directory,
.scrub = xchk_directory,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_XATTR] = { /* extended attributes */
.type = ST_INODE,
.setup = xfs_scrub_setup_xattr,
.scrub = xfs_scrub_xattr,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_xattr,
.scrub = xchk_xattr,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_SYMLINK] = { /* symbolic link */
.type = ST_INODE,
.setup = xfs_scrub_setup_symlink,
.scrub = xfs_scrub_symlink,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_symlink,
.scrub = xchk_symlink,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_PARENT] = { /* parent pointers */
.type = ST_INODE,
.setup = xfs_scrub_setup_parent,
.scrub = xfs_scrub_parent,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_parent,
.scrub = xchk_parent,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_RTBITMAP] = { /* realtime bitmap */
.type = ST_FS,
.setup = xfs_scrub_setup_rt,
.scrub = xfs_scrub_rtbitmap,
.setup = xchk_setup_rt,
.scrub = xchk_rtbitmap,
.has = xfs_sb_version_hasrealtime,
.repair = xfs_repair_notsupported,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_RTSUM] = { /* realtime summary */
.type = ST_FS,
.setup = xfs_scrub_setup_rt,
.scrub = xfs_scrub_rtsummary,
.setup = xchk_setup_rt,
.scrub = xchk_rtsummary,
.has = xfs_sb_version_hasrealtime,
.repair = xfs_repair_notsupported,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_UQUOTA] = { /* user quota */
.type = ST_FS,
.setup = xfs_scrub_setup_quota,
.scrub = xfs_scrub_quota,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_quota,
.scrub = xchk_quota,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_GQUOTA] = { /* group quota */
.type = ST_FS,
.setup = xfs_scrub_setup_quota,
.scrub = xfs_scrub_quota,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_quota,
.scrub = xchk_quota,
.repair = xrep_notsupported,
},
[XFS_SCRUB_TYPE_PQUOTA] = { /* project quota */
.type = ST_FS,
.setup = xfs_scrub_setup_quota,
.scrub = xfs_scrub_quota,
.repair = xfs_repair_notsupported,
.setup = xchk_setup_quota,
.scrub = xchk_quota,
.repair = xrep_notsupported,
},
};
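[Editor's note: dispatch through this table follows the usual ops-vector
pattern. A simplified, hypothetical sketch of the flow — the real
xfs_scrub_metadata() below adds input validation, locking, retry, and
teardown:]

	const struct xchk_meta_ops *ops = &meta_scrub_ops[sm->sm_type];

	error = ops->setup(&sc, ip);		/* acquire resources */
	if (!error)
		error = ops->scrub(&sc);	/* examine the metadata */
	if (!error && (sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR))
		error = ops->repair(&sc);	/* fix what scrub flagged */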
/* This isn't a stable feature, warn once per day. */
static inline void
xfs_scrub_experimental_warning(
xchk_experimental_warning(
struct xfs_mount *mp)
{
static struct ratelimit_state scrub_warning = RATELIMIT_STATE_INIT(
"xfs_scrub_warning", 86400 * HZ, 1);
"xchk_warning", 86400 * HZ, 1);
ratelimit_set_flags(&scrub_warning, RATELIMIT_MSG_ON_RELEASE);
if (__ratelimit(&scrub_warning))
@ -358,12 +364,12 @@ xfs_scrub_experimental_warning(
}
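[Editor's note: the once-per-day behavior comes from the kernel ratelimit API:
an interval of 86400 * HZ with a burst of 1 admits one message per day, and
RATELIMIT_MSG_ON_RELEASE defers the "callbacks suppressed" chatter. A
standalone sketch of the idiom, with an illustrative alert string:]

	static struct ratelimit_state once_per_day = RATELIMIT_STATE_INIT(
			"once_per_day", 86400 * HZ, 1);

	ratelimit_set_flags(&once_per_day, RATELIMIT_MSG_ON_RELEASE);
	if (__ratelimit(&once_per_day))
		xfs_alert(mp, "EXPERIMENTAL feature in use; use at your own risk!");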
static int
xfs_scrub_validate_inputs(
xchk_validate_inputs(
struct xfs_mount *mp,
struct xfs_scrub_metadata *sm)
{
int error;
const struct xfs_scrub_meta_ops *ops;
const struct xchk_meta_ops *ops;
error = -EINVAL;
/* Check our inputs. */
@ -441,7 +447,7 @@ out:
}
#ifdef CONFIG_XFS_ONLINE_REPAIR
static inline void xfs_scrub_postmortem(struct xfs_scrub_context *sc)
static inline void xchk_postmortem(struct xfs_scrub *sc)
{
/*
* Userspace asked us to repair something, we repaired it, rescanned
@ -451,10 +457,10 @@ static inline void xfs_scrub_postmortem(struct xfs_scrub_context *sc)
if ((sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR) &&
(sc->sm->sm_flags & (XFS_SCRUB_OFLAG_CORRUPT |
XFS_SCRUB_OFLAG_XCORRUPT)))
xfs_repair_failure(sc->mp);
xrep_failure(sc->mp);
}
#else
static inline void xfs_scrub_postmortem(struct xfs_scrub_context *sc)
static inline void xchk_postmortem(struct xfs_scrub *sc)
{
/*
* Userspace asked us to scrub something, it's broken, and we have no
@ -473,16 +479,16 @@ xfs_scrub_metadata(
struct xfs_inode *ip,
struct xfs_scrub_metadata *sm)
{
struct xfs_scrub_context sc;
struct xfs_scrub sc;
struct xfs_mount *mp = ip->i_mount;
bool try_harder = false;
bool already_fixed = false;
int error = 0;
BUILD_BUG_ON(sizeof(meta_scrub_ops) !=
(sizeof(struct xfs_scrub_meta_ops) * XFS_SCRUB_TYPE_NR));
(sizeof(struct xchk_meta_ops) * XFS_SCRUB_TYPE_NR));
trace_xfs_scrub_start(ip, sm, error);
trace_xchk_start(ip, sm, error);
/* Forbidden if we are shut down or mounted norecovery. */
error = -ESHUTDOWN;
@ -492,11 +498,11 @@ xfs_scrub_metadata(
if (mp->m_flags & XFS_MOUNT_NORECOVERY)
goto out;
error = xfs_scrub_validate_inputs(mp, sm);
error = xchk_validate_inputs(mp, sm);
if (error)
goto out;
xfs_scrub_experimental_warning(mp);
xchk_experimental_warning(mp);
retry_op:
/* Set up for the operation. */
@ -518,7 +524,7 @@ retry_op:
* Tear down everything we hold, then set up again with
* preparation for worst-case scenarios.
*/
error = xfs_scrub_teardown(&sc, ip, 0);
error = xchk_teardown(&sc, ip, 0);
if (error)
goto out;
try_harder = true;
@ -549,13 +555,13 @@ retry_op:
* If it's broken, userspace wants us to fix it, and we haven't
* already tried to fix it, then attempt a repair.
*/
error = xfs_repair_attempt(ip, &sc, &already_fixed);
error = xrep_attempt(ip, &sc, &already_fixed);
if (error == -EAGAIN) {
if (sc.try_harder)
try_harder = true;
error = xfs_scrub_teardown(&sc, ip, 0);
error = xchk_teardown(&sc, ip, 0);
if (error) {
xfs_repair_failure(mp);
xrep_failure(mp);
goto out;
}
goto retry_op;
@ -563,11 +569,11 @@ retry_op:
}
out_nofix:
xfs_scrub_postmortem(&sc);
xchk_postmortem(&sc);
out_teardown:
error = xfs_scrub_teardown(&sc, ip, error);
error = xchk_teardown(&sc, ip, error);
out:
trace_xfs_scrub_done(ip, sm, error);
trace_xchk_done(ip, sm, error);
if (error == -EFSCORRUPTED || error == -EFSBADCRC) {
sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
error = 0;


@ -6,58 +6,58 @@
#ifndef __XFS_SCRUB_SCRUB_H__
#define __XFS_SCRUB_SCRUB_H__
struct xfs_scrub_context;
struct xfs_scrub;
/* Type info and names for the scrub types. */
enum xfs_scrub_type {
enum xchk_type {
ST_NONE = 1, /* disabled */
ST_PERAG, /* per-AG metadata */
ST_FS, /* per-FS metadata */
ST_INODE, /* per-inode metadata */
};
struct xfs_scrub_meta_ops {
struct xchk_meta_ops {
/* Acquire whatever resources are needed for the operation. */
int (*setup)(struct xfs_scrub_context *,
int (*setup)(struct xfs_scrub *,
struct xfs_inode *);
/* Examine metadata for errors. */
int (*scrub)(struct xfs_scrub_context *);
int (*scrub)(struct xfs_scrub *);
/* Repair or optimize the metadata. */
int (*repair)(struct xfs_scrub_context *);
int (*repair)(struct xfs_scrub *);
/* Decide if we even have this piece of metadata. */
bool (*has)(struct xfs_sb *);
/* type describing required/allowed inputs */
enum xfs_scrub_type type;
enum xchk_type type;
};
/* Buffer pointers and btree cursors for an entire AG. */
struct xfs_scrub_ag {
xfs_agnumber_t agno;
struct xfs_perag *pag;
struct xchk_ag {
xfs_agnumber_t agno;
struct xfs_perag *pag;
/* AG btree roots */
struct xfs_buf *agf_bp;
struct xfs_buf *agfl_bp;
struct xfs_buf *agi_bp;
struct xfs_buf *agf_bp;
struct xfs_buf *agfl_bp;
struct xfs_buf *agi_bp;
/* AG btrees */
struct xfs_btree_cur *bno_cur;
struct xfs_btree_cur *cnt_cur;
struct xfs_btree_cur *ino_cur;
struct xfs_btree_cur *fino_cur;
struct xfs_btree_cur *rmap_cur;
struct xfs_btree_cur *refc_cur;
struct xfs_btree_cur *bno_cur;
struct xfs_btree_cur *cnt_cur;
struct xfs_btree_cur *ino_cur;
struct xfs_btree_cur *fino_cur;
struct xfs_btree_cur *rmap_cur;
struct xfs_btree_cur *refc_cur;
};
struct xfs_scrub_context {
struct xfs_scrub {
/* General scrub state. */
struct xfs_mount *mp;
struct xfs_scrub_metadata *sm;
const struct xfs_scrub_meta_ops *ops;
const struct xchk_meta_ops *ops;
struct xfs_trans *tp;
struct xfs_inode *ip;
void *buf;
@ -66,78 +66,76 @@ struct xfs_scrub_context {
bool has_quotaofflock;
/* State tracking for single-AG operations. */
struct xfs_scrub_ag sa;
struct xchk_ag sa;
};
/* Metadata scrubbers */
int xfs_scrub_tester(struct xfs_scrub_context *sc);
int xfs_scrub_superblock(struct xfs_scrub_context *sc);
int xfs_scrub_agf(struct xfs_scrub_context *sc);
int xfs_scrub_agfl(struct xfs_scrub_context *sc);
int xfs_scrub_agi(struct xfs_scrub_context *sc);
int xfs_scrub_bnobt(struct xfs_scrub_context *sc);
int xfs_scrub_cntbt(struct xfs_scrub_context *sc);
int xfs_scrub_inobt(struct xfs_scrub_context *sc);
int xfs_scrub_finobt(struct xfs_scrub_context *sc);
int xfs_scrub_rmapbt(struct xfs_scrub_context *sc);
int xfs_scrub_refcountbt(struct xfs_scrub_context *sc);
int xfs_scrub_inode(struct xfs_scrub_context *sc);
int xfs_scrub_bmap_data(struct xfs_scrub_context *sc);
int xfs_scrub_bmap_attr(struct xfs_scrub_context *sc);
int xfs_scrub_bmap_cow(struct xfs_scrub_context *sc);
int xfs_scrub_directory(struct xfs_scrub_context *sc);
int xfs_scrub_xattr(struct xfs_scrub_context *sc);
int xfs_scrub_symlink(struct xfs_scrub_context *sc);
int xfs_scrub_parent(struct xfs_scrub_context *sc);
int xchk_tester(struct xfs_scrub *sc);
int xchk_superblock(struct xfs_scrub *sc);
int xchk_agf(struct xfs_scrub *sc);
int xchk_agfl(struct xfs_scrub *sc);
int xchk_agi(struct xfs_scrub *sc);
int xchk_bnobt(struct xfs_scrub *sc);
int xchk_cntbt(struct xfs_scrub *sc);
int xchk_inobt(struct xfs_scrub *sc);
int xchk_finobt(struct xfs_scrub *sc);
int xchk_rmapbt(struct xfs_scrub *sc);
int xchk_refcountbt(struct xfs_scrub *sc);
int xchk_inode(struct xfs_scrub *sc);
int xchk_bmap_data(struct xfs_scrub *sc);
int xchk_bmap_attr(struct xfs_scrub *sc);
int xchk_bmap_cow(struct xfs_scrub *sc);
int xchk_directory(struct xfs_scrub *sc);
int xchk_xattr(struct xfs_scrub *sc);
int xchk_symlink(struct xfs_scrub *sc);
int xchk_parent(struct xfs_scrub *sc);
#ifdef CONFIG_XFS_RT
int xfs_scrub_rtbitmap(struct xfs_scrub_context *sc);
int xfs_scrub_rtsummary(struct xfs_scrub_context *sc);
int xchk_rtbitmap(struct xfs_scrub *sc);
int xchk_rtsummary(struct xfs_scrub *sc);
#else
static inline int
xfs_scrub_rtbitmap(struct xfs_scrub_context *sc)
xchk_rtbitmap(struct xfs_scrub *sc)
{
return -ENOENT;
}
static inline int
xfs_scrub_rtsummary(struct xfs_scrub_context *sc)
xchk_rtsummary(struct xfs_scrub *sc)
{
return -ENOENT;
}
#endif
#ifdef CONFIG_XFS_QUOTA
int xfs_scrub_quota(struct xfs_scrub_context *sc);
int xchk_quota(struct xfs_scrub *sc);
#else
static inline int
xfs_scrub_quota(struct xfs_scrub_context *sc)
xchk_quota(struct xfs_scrub *sc)
{
return -ENOENT;
}
#endif
/* cross-referencing helpers */
void xfs_scrub_xref_is_used_space(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len);
void xfs_scrub_xref_is_not_inode_chunk(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len);
void xfs_scrub_xref_is_inode_chunk(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len);
void xfs_scrub_xref_is_owned_by(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len,
struct xfs_owner_info *oinfo);
void xfs_scrub_xref_is_not_owned_by(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len,
struct xfs_owner_info *oinfo);
void xfs_scrub_xref_has_no_owner(struct xfs_scrub_context *sc,
xfs_agblock_t agbno, xfs_extlen_t len);
void xfs_scrub_xref_is_cow_staging(struct xfs_scrub_context *sc,
xfs_agblock_t bno, xfs_extlen_t len);
void xfs_scrub_xref_is_not_shared(struct xfs_scrub_context *sc,
xfs_agblock_t bno, xfs_extlen_t len);
void xchk_xref_is_used_space(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len);
void xchk_xref_is_not_inode_chunk(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len);
void xchk_xref_is_inode_chunk(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len);
void xchk_xref_is_owned_by(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len, struct xfs_owner_info *oinfo);
void xchk_xref_is_not_owned_by(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len, struct xfs_owner_info *oinfo);
void xchk_xref_has_no_owner(struct xfs_scrub *sc, xfs_agblock_t agbno,
xfs_extlen_t len);
void xchk_xref_is_cow_staging(struct xfs_scrub *sc, xfs_agblock_t bno,
xfs_extlen_t len);
void xchk_xref_is_not_shared(struct xfs_scrub *sc, xfs_agblock_t bno,
xfs_extlen_t len);
#ifdef CONFIG_XFS_RT
void xfs_scrub_xref_is_used_rt_space(struct xfs_scrub_context *sc,
xfs_rtblock_t rtbno, xfs_extlen_t len);
void xchk_xref_is_used_rt_space(struct xfs_scrub *sc, xfs_rtblock_t rtbno,
xfs_extlen_t len);
#else
# define xfs_scrub_xref_is_used_rt_space(sc, rtbno, len) do { } while (0)
# define xchk_xref_is_used_rt_space(sc, rtbno, len) do { } while (0)
#endif
#endif /* __XFS_SCRUB_SCRUB_H__ */


@ -25,28 +25,28 @@
/* Set us up to scrub a symbolic link. */
int
xfs_scrub_setup_symlink(
struct xfs_scrub_context *sc,
struct xfs_inode *ip)
xchk_setup_symlink(
struct xfs_scrub *sc,
struct xfs_inode *ip)
{
/* Allocate the buffer without the inode lock held. */
sc->buf = kmem_zalloc_large(XFS_SYMLINK_MAXLEN + 1, KM_SLEEP);
if (!sc->buf)
return -ENOMEM;
return xfs_scrub_setup_inode_contents(sc, ip, 0);
return xchk_setup_inode_contents(sc, ip, 0);
}
/* Symbolic links. */
int
xfs_scrub_symlink(
struct xfs_scrub_context *sc)
xchk_symlink(
struct xfs_scrub *sc)
{
struct xfs_inode *ip = sc->ip;
struct xfs_ifork *ifp;
loff_t len;
int error = 0;
struct xfs_inode *ip = sc->ip;
struct xfs_ifork *ifp;
loff_t len;
int error = 0;
if (!S_ISLNK(VFS_I(ip)->i_mode))
return -ENOENT;
@ -55,7 +55,7 @@ xfs_scrub_symlink(
/* Plausible size? */
if (len > XFS_SYMLINK_MAXLEN || len <= 0) {
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
@ -63,16 +63,16 @@ xfs_scrub_symlink(
if (ifp->if_flags & XFS_IFINLINE) {
if (len > XFS_IFORK_DSIZE(ip) ||
len > strnlen(ifp->if_u1.if_data, XFS_IFORK_DSIZE(ip)))
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
goto out;
}
/* Remote symlink; must read the contents. */
error = xfs_readlink_bmap_ilocked(sc->ip, sc->buf);
if (!xfs_scrub_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, 0, &error))
goto out;
if (strnlen(sc->buf, XFS_SYMLINK_MAXLEN) < len)
xfs_scrub_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 0);
out:
return error;
}


@ -22,9 +22,9 @@
/* Figure out which block the btree cursor was pointing to. */
static inline xfs_fsblock_t
xfs_scrub_btree_cur_fsbno(
struct xfs_btree_cur *cur,
int level)
xchk_btree_cur_fsbno(
struct xfs_btree_cur *cur,
int level)
{
if (level < cur->bc_nlevels && cur->bc_bufs[level])
return XFS_DADDR_TO_FSB(cur->bc_mp, cur->bc_bufs[level]->b_bn);


@ -12,7 +12,7 @@
#include <linux/tracepoint.h>
#include "xfs_bit.h"
DECLARE_EVENT_CLASS(xfs_scrub_class,
DECLARE_EVENT_CLASS(xchk_class,
TP_PROTO(struct xfs_inode *ip, struct xfs_scrub_metadata *sm,
int error),
TP_ARGS(ip, sm, error),
@ -47,19 +47,19 @@ DECLARE_EVENT_CLASS(xfs_scrub_class,
__entry->error)
)
#define DEFINE_SCRUB_EVENT(name) \
DEFINE_EVENT(xfs_scrub_class, name, \
DEFINE_EVENT(xchk_class, name, \
TP_PROTO(struct xfs_inode *ip, struct xfs_scrub_metadata *sm, \
int error), \
TP_ARGS(ip, sm, error))
DEFINE_SCRUB_EVENT(xfs_scrub_start);
DEFINE_SCRUB_EVENT(xfs_scrub_done);
DEFINE_SCRUB_EVENT(xfs_scrub_deadlock_retry);
DEFINE_SCRUB_EVENT(xfs_repair_attempt);
DEFINE_SCRUB_EVENT(xfs_repair_done);
DEFINE_SCRUB_EVENT(xchk_start);
DEFINE_SCRUB_EVENT(xchk_done);
DEFINE_SCRUB_EVENT(xchk_deadlock_retry);
DEFINE_SCRUB_EVENT(xrep_attempt);
DEFINE_SCRUB_EVENT(xrep_done);
TRACE_EVENT(xfs_scrub_op_error,
TP_PROTO(struct xfs_scrub_context *sc, xfs_agnumber_t agno,
TRACE_EVENT(xchk_op_error,
TP_PROTO(struct xfs_scrub *sc, xfs_agnumber_t agno,
xfs_agblock_t bno, int error, void *ret_ip),
TP_ARGS(sc, agno, bno, error, ret_ip),
TP_STRUCT__entry(
@ -87,8 +87,8 @@ TRACE_EVENT(xfs_scrub_op_error,
__entry->ret_ip)
);
TRACE_EVENT(xfs_scrub_file_op_error,
TP_PROTO(struct xfs_scrub_context *sc, int whichfork,
TRACE_EVENT(xchk_file_op_error,
TP_PROTO(struct xfs_scrub *sc, int whichfork,
xfs_fileoff_t offset, int error, void *ret_ip),
TP_ARGS(sc, whichfork, offset, error, ret_ip),
TP_STRUCT__entry(
@ -119,8 +119,8 @@ TRACE_EVENT(xfs_scrub_file_op_error,
__entry->ret_ip)
);
DECLARE_EVENT_CLASS(xfs_scrub_block_error_class,
TP_PROTO(struct xfs_scrub_context *sc, xfs_daddr_t daddr, void *ret_ip),
DECLARE_EVENT_CLASS(xchk_block_error_class,
TP_PROTO(struct xfs_scrub *sc, xfs_daddr_t daddr, void *ret_ip),
TP_ARGS(sc, daddr, ret_ip),
TP_STRUCT__entry(
__field(dev_t, dev)
@ -153,16 +153,16 @@ DECLARE_EVENT_CLASS(xfs_scrub_block_error_class,
)
#define DEFINE_SCRUB_BLOCK_ERROR_EVENT(name) \
DEFINE_EVENT(xfs_scrub_block_error_class, name, \
TP_PROTO(struct xfs_scrub_context *sc, xfs_daddr_t daddr, \
DEFINE_EVENT(xchk_block_error_class, name, \
TP_PROTO(struct xfs_scrub *sc, xfs_daddr_t daddr, \
void *ret_ip), \
TP_ARGS(sc, daddr, ret_ip))
DEFINE_SCRUB_BLOCK_ERROR_EVENT(xfs_scrub_block_error);
DEFINE_SCRUB_BLOCK_ERROR_EVENT(xfs_scrub_block_preen);
DEFINE_SCRUB_BLOCK_ERROR_EVENT(xchk_block_error);
DEFINE_SCRUB_BLOCK_ERROR_EVENT(xchk_block_preen);
DECLARE_EVENT_CLASS(xfs_scrub_ino_error_class,
TP_PROTO(struct xfs_scrub_context *sc, xfs_ino_t ino, void *ret_ip),
DECLARE_EVENT_CLASS(xchk_ino_error_class,
TP_PROTO(struct xfs_scrub *sc, xfs_ino_t ino, void *ret_ip),
TP_ARGS(sc, ino, ret_ip),
TP_STRUCT__entry(
__field(dev_t, dev)
@ -184,17 +184,17 @@ DECLARE_EVENT_CLASS(xfs_scrub_ino_error_class,
)
#define DEFINE_SCRUB_INO_ERROR_EVENT(name) \
DEFINE_EVENT(xfs_scrub_ino_error_class, name, \
TP_PROTO(struct xfs_scrub_context *sc, xfs_ino_t ino, \
DEFINE_EVENT(xchk_ino_error_class, name, \
TP_PROTO(struct xfs_scrub *sc, xfs_ino_t ino, \
void *ret_ip), \
TP_ARGS(sc, ino, ret_ip))
DEFINE_SCRUB_INO_ERROR_EVENT(xfs_scrub_ino_error);
DEFINE_SCRUB_INO_ERROR_EVENT(xfs_scrub_ino_preen);
DEFINE_SCRUB_INO_ERROR_EVENT(xfs_scrub_ino_warning);
DEFINE_SCRUB_INO_ERROR_EVENT(xchk_ino_error);
DEFINE_SCRUB_INO_ERROR_EVENT(xchk_ino_preen);
DEFINE_SCRUB_INO_ERROR_EVENT(xchk_ino_warning);
DECLARE_EVENT_CLASS(xfs_scrub_fblock_error_class,
TP_PROTO(struct xfs_scrub_context *sc, int whichfork,
DECLARE_EVENT_CLASS(xchk_fblock_error_class,
TP_PROTO(struct xfs_scrub *sc, int whichfork,
xfs_fileoff_t offset, void *ret_ip),
TP_ARGS(sc, whichfork, offset, ret_ip),
TP_STRUCT__entry(
@ -223,16 +223,16 @@ DECLARE_EVENT_CLASS(xfs_scrub_fblock_error_class,
);
#define DEFINE_SCRUB_FBLOCK_ERROR_EVENT(name) \
DEFINE_EVENT(xfs_scrub_fblock_error_class, name, \
TP_PROTO(struct xfs_scrub_context *sc, int whichfork, \
DEFINE_EVENT(xchk_fblock_error_class, name, \
TP_PROTO(struct xfs_scrub *sc, int whichfork, \
xfs_fileoff_t offset, void *ret_ip), \
TP_ARGS(sc, whichfork, offset, ret_ip))
DEFINE_SCRUB_FBLOCK_ERROR_EVENT(xfs_scrub_fblock_error);
DEFINE_SCRUB_FBLOCK_ERROR_EVENT(xfs_scrub_fblock_warning);
DEFINE_SCRUB_FBLOCK_ERROR_EVENT(xchk_fblock_error);
DEFINE_SCRUB_FBLOCK_ERROR_EVENT(xchk_fblock_warning);
TRACE_EVENT(xfs_scrub_incomplete,
TP_PROTO(struct xfs_scrub_context *sc, void *ret_ip),
TRACE_EVENT(xchk_incomplete,
TP_PROTO(struct xfs_scrub *sc, void *ret_ip),
TP_ARGS(sc, ret_ip),
TP_STRUCT__entry(
__field(dev_t, dev)
@ -250,8 +250,8 @@ TRACE_EVENT(xfs_scrub_incomplete,
__entry->ret_ip)
);
TRACE_EVENT(xfs_scrub_btree_op_error,
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
TRACE_EVENT(xchk_btree_op_error,
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
int level, int error, void *ret_ip),
TP_ARGS(sc, cur, level, error, ret_ip),
TP_STRUCT__entry(
@ -266,7 +266,7 @@ TRACE_EVENT(xfs_scrub_btree_op_error,
__field(void *, ret_ip)
),
TP_fast_assign(
xfs_fsblock_t fsbno = xfs_scrub_btree_cur_fsbno(cur, level);
xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
__entry->dev = sc->mp->m_super->s_dev;
__entry->type = sc->sm->sm_type;
@ -290,8 +290,8 @@ TRACE_EVENT(xfs_scrub_btree_op_error,
__entry->ret_ip)
);
TRACE_EVENT(xfs_scrub_ifork_btree_op_error,
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
TRACE_EVENT(xchk_ifork_btree_op_error,
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
int level, int error, void *ret_ip),
TP_ARGS(sc, cur, level, error, ret_ip),
TP_STRUCT__entry(
@ -308,7 +308,7 @@ TRACE_EVENT(xfs_scrub_ifork_btree_op_error,
__field(void *, ret_ip)
),
TP_fast_assign(
xfs_fsblock_t fsbno = xfs_scrub_btree_cur_fsbno(cur, level);
xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
__entry->dev = sc->mp->m_super->s_dev;
__entry->ino = sc->ip->i_ino;
__entry->whichfork = cur->bc_private.b.whichfork;
@ -335,8 +335,8 @@ TRACE_EVENT(xfs_scrub_ifork_btree_op_error,
__entry->ret_ip)
);
TRACE_EVENT(xfs_scrub_btree_error,
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
TRACE_EVENT(xchk_btree_error,
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
int level, void *ret_ip),
TP_ARGS(sc, cur, level, ret_ip),
TP_STRUCT__entry(
@ -350,7 +350,7 @@ TRACE_EVENT(xfs_scrub_btree_error,
__field(void *, ret_ip)
),
TP_fast_assign(
xfs_fsblock_t fsbno = xfs_scrub_btree_cur_fsbno(cur, level);
xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
__entry->dev = sc->mp->m_super->s_dev;
__entry->type = sc->sm->sm_type;
__entry->btnum = cur->bc_btnum;
@ -371,8 +371,8 @@ TRACE_EVENT(xfs_scrub_btree_error,
__entry->ret_ip)
);
TRACE_EVENT(xfs_scrub_ifork_btree_error,
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
TRACE_EVENT(xchk_ifork_btree_error,
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
int level, void *ret_ip),
TP_ARGS(sc, cur, level, ret_ip),
TP_STRUCT__entry(
@ -388,7 +388,7 @@ TRACE_EVENT(xfs_scrub_ifork_btree_error,
__field(void *, ret_ip)
),
TP_fast_assign(
xfs_fsblock_t fsbno = xfs_scrub_btree_cur_fsbno(cur, level);
xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
__entry->dev = sc->mp->m_super->s_dev;
__entry->ino = sc->ip->i_ino;
__entry->whichfork = cur->bc_private.b.whichfork;
@ -413,8 +413,8 @@ TRACE_EVENT(xfs_scrub_ifork_btree_error,
__entry->ret_ip)
);
DECLARE_EVENT_CLASS(xfs_scrub_sbtree_class,
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur,
DECLARE_EVENT_CLASS(xchk_sbtree_class,
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur,
int level),
TP_ARGS(sc, cur, level),
TP_STRUCT__entry(
@ -428,7 +428,7 @@ DECLARE_EVENT_CLASS(xfs_scrub_sbtree_class,
__field(int, ptr)
),
TP_fast_assign(
xfs_fsblock_t fsbno = xfs_scrub_btree_cur_fsbno(cur, level);
xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
__entry->dev = sc->mp->m_super->s_dev;
__entry->type = sc->sm->sm_type;
@ -450,16 +450,16 @@ DECLARE_EVENT_CLASS(xfs_scrub_sbtree_class,
__entry->ptr)
)
#define DEFINE_SCRUB_SBTREE_EVENT(name) \
DEFINE_EVENT(xfs_scrub_sbtree_class, name, \
TP_PROTO(struct xfs_scrub_context *sc, struct xfs_btree_cur *cur, \
DEFINE_EVENT(xchk_sbtree_class, name, \
TP_PROTO(struct xfs_scrub *sc, struct xfs_btree_cur *cur, \
int level), \
TP_ARGS(sc, cur, level))
DEFINE_SCRUB_SBTREE_EVENT(xfs_scrub_btree_rec);
DEFINE_SCRUB_SBTREE_EVENT(xfs_scrub_btree_key);
DEFINE_SCRUB_SBTREE_EVENT(xchk_btree_rec);
DEFINE_SCRUB_SBTREE_EVENT(xchk_btree_key);
TRACE_EVENT(xfs_scrub_xref_error,
TP_PROTO(struct xfs_scrub_context *sc, int error, void *ret_ip),
TRACE_EVENT(xchk_xref_error,
TP_PROTO(struct xfs_scrub *sc, int error, void *ret_ip),
TP_ARGS(sc, error, ret_ip),
TP_STRUCT__entry(
__field(dev_t, dev)
@ -483,7 +483,7 @@ TRACE_EVENT(xfs_scrub_xref_error,
/* repair tracepoints */
#if IS_ENABLED(CONFIG_XFS_ONLINE_REPAIR)
DECLARE_EVENT_CLASS(xfs_repair_extent_class,
DECLARE_EVENT_CLASS(xrep_extent_class,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_agblock_t agbno, xfs_extlen_t len),
TP_ARGS(mp, agno, agbno, len),
@ -506,15 +506,14 @@ DECLARE_EVENT_CLASS(xfs_repair_extent_class,
__entry->len)
);
#define DEFINE_REPAIR_EXTENT_EVENT(name) \
DEFINE_EVENT(xfs_repair_extent_class, name, \
DEFINE_EVENT(xrep_extent_class, name, \
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, \
xfs_agblock_t agbno, xfs_extlen_t len), \
TP_ARGS(mp, agno, agbno, len))
DEFINE_REPAIR_EXTENT_EVENT(xfs_repair_dispose_btree_extent);
DEFINE_REPAIR_EXTENT_EVENT(xfs_repair_collect_btree_extent);
DEFINE_REPAIR_EXTENT_EVENT(xfs_repair_agfl_insert);
DEFINE_REPAIR_EXTENT_EVENT(xrep_dispose_btree_extent);
DEFINE_REPAIR_EXTENT_EVENT(xrep_agfl_insert);
DECLARE_EVENT_CLASS(xfs_repair_rmap_class,
DECLARE_EVENT_CLASS(xrep_rmap_class,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_agblock_t agbno, xfs_extlen_t len,
uint64_t owner, uint64_t offset, unsigned int flags),
@ -547,17 +546,17 @@ DECLARE_EVENT_CLASS(xfs_repair_rmap_class,
__entry->flags)
);
#define DEFINE_REPAIR_RMAP_EVENT(name) \
DEFINE_EVENT(xfs_repair_rmap_class, name, \
DEFINE_EVENT(xrep_rmap_class, name, \
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, \
xfs_agblock_t agbno, xfs_extlen_t len, \
uint64_t owner, uint64_t offset, unsigned int flags), \
TP_ARGS(mp, agno, agbno, len, owner, offset, flags))
DEFINE_REPAIR_RMAP_EVENT(xfs_repair_alloc_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xfs_repair_ialloc_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xfs_repair_rmap_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xfs_repair_bmap_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xrep_alloc_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xrep_ialloc_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xrep_rmap_extent_fn);
DEFINE_REPAIR_RMAP_EVENT(xrep_bmap_extent_fn);
TRACE_EVENT(xfs_repair_refcount_extent_fn,
TRACE_EVENT(xrep_refcount_extent_fn,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
struct xfs_refcount_irec *irec),
TP_ARGS(mp, agno, irec),
@ -583,7 +582,7 @@ TRACE_EVENT(xfs_repair_refcount_extent_fn,
__entry->refcount)
)
TRACE_EVENT(xfs_repair_init_btblock,
TRACE_EVENT(xrep_init_btblock,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno,
xfs_btnum_t btnum),
TP_ARGS(mp, agno, agbno, btnum),
@ -605,7 +604,7 @@ TRACE_EVENT(xfs_repair_init_btblock,
__entry->agbno,
__entry->btnum)
)
TRACE_EVENT(xfs_repair_findroot_block,
TRACE_EVENT(xrep_findroot_block,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno,
uint32_t magic, uint16_t level),
TP_ARGS(mp, agno, agbno, magic, level),
@ -630,7 +629,7 @@ TRACE_EVENT(xfs_repair_findroot_block,
__entry->magic,
__entry->level)
)
TRACE_EVENT(xfs_repair_calc_ag_resblks,
TRACE_EVENT(xrep_calc_ag_resblks,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_agino_t icount, xfs_agblock_t aglen, xfs_agblock_t freelen,
xfs_agblock_t usedlen),
@ -659,7 +658,7 @@ TRACE_EVENT(xfs_repair_calc_ag_resblks,
__entry->freelen,
__entry->usedlen)
)
TRACE_EVENT(xfs_repair_calc_ag_resblks_btsize,
TRACE_EVENT(xrep_calc_ag_resblks_btsize,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_agblock_t bnobt_sz, xfs_agblock_t inobt_sz,
xfs_agblock_t rmapbt_sz, xfs_agblock_t refcbt_sz),
@ -688,7 +687,7 @@ TRACE_EVENT(xfs_repair_calc_ag_resblks_btsize,
__entry->rmapbt_sz,
__entry->refcbt_sz)
)
TRACE_EVENT(xfs_repair_reset_counters,
TRACE_EVENT(xrep_reset_counters,
TP_PROTO(struct xfs_mount *mp),
TP_ARGS(mp),
TP_STRUCT__entry(
@ -701,7 +700,7 @@ TRACE_EVENT(xfs_repair_reset_counters,
MAJOR(__entry->dev), MINOR(__entry->dev))
)
TRACE_EVENT(xfs_repair_ialloc_insert,
TRACE_EVENT(xrep_ialloc_insert,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno,
xfs_agino_t startino, uint16_t holemask, uint8_t count,
uint8_t freecount, uint64_t freemask),


@ -8,7 +8,6 @@
#ifdef CONFIG_XFS_DEBUG
#define DEBUG 1
#define XFS_BUF_LOCK_TRACKING 1
#endif
#ifdef CONFIG_XFS_ASSERT_FATAL

(The diff for one file is not shown here because of its size.)


@ -17,6 +17,7 @@ enum {
XFS_IO_UNWRITTEN, /* covers allocated but uninitialized data */
XFS_IO_OVERWRITE, /* covers already allocated extent */
XFS_IO_COW, /* covers copy-on-write extent */
XFS_IO_HOLE, /* covers region without any block allocation */
};
#define XFS_IO_TYPES \
@ -24,7 +25,8 @@ enum {
{ XFS_IO_DELALLOC, "delalloc" }, \
{ XFS_IO_UNWRITTEN, "unwritten" }, \
{ XFS_IO_OVERWRITE, "overwrite" }, \
{ XFS_IO_COW, "CoW" }
{ XFS_IO_COW, "CoW" }, \
{ XFS_IO_HOLE, "hole" }
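[Editor's note: these { value, "name" } pairs exist so tracepoints can print
the I/O type symbolically, presumably via the standard __print_symbolic()
helper, along these lines:]

	TP_printk("dev %d:%d ... type %s",
		  MAJOR(__entry->dev), MINOR(__entry->dev),
		  __print_symbolic(__entry->type, XFS_IO_TYPES))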
/*
* Structure for buffered I/O completions.


@ -26,6 +26,7 @@
#include "xfs_quota.h"
#include "xfs_trace.h"
#include "xfs_dir2.h"
#include "xfs_defer.h"
/*
* Look at all the extents for this logical region,


@ -87,7 +87,7 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
*/
if (context->bufsize == 0 ||
(XFS_ISRESET_CURSOR(cursor) &&
(dp->i_afp->if_bytes + sf->hdr.count * 16) < context->bufsize)) {
(dp->i_afp->if_bytes + sf->hdr.count * 16) < context->bufsize)) {
for (i = 0, sfe = &sf->list[0]; i < sf->hdr.count; i++) {
context->put_listent(context,
sfe->flags,


@ -375,9 +375,8 @@ xfs_bud_init(
*/
int
xfs_bui_recover(
struct xfs_mount *mp,
struct xfs_bui_log_item *buip,
struct xfs_defer_ops *dfops)
struct xfs_trans *parent_tp,
struct xfs_bui_log_item *buip)
{
int error = 0;
unsigned int bui_type;
@ -393,6 +392,7 @@ xfs_bui_recover(
struct xfs_trans *tp;
struct xfs_inode *ip = NULL;
struct xfs_bmbt_irec irec;
struct xfs_mount *mp = parent_tp->t_mountp;
ASSERT(!test_bit(XFS_BUI_RECOVERED, &buip->bui_flags));
@ -441,6 +441,12 @@ xfs_bui_recover(
XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK), 0, 0, &tp);
if (error)
return error;
/*
* Recovery stashes all deferred ops during intent processing and
* finishes them on completion. Transfer current dfops state to this
* transaction and transfer the result back before we return.
*/
xfs_defer_move(tp, parent_tp);
budp = xfs_trans_get_bud(tp, buip);
/* Grab the inode. */
@ -469,9 +475,8 @@ xfs_bui_recover(
xfs_trans_ijoin(tp, ip, 0);
count = bmap->me_len;
error = xfs_trans_log_finish_bmap_update(tp, budp, dfops, type,
ip, whichfork, bmap->me_startoff,
bmap->me_startblock, &count, state);
error = xfs_trans_log_finish_bmap_update(tp, budp, type, ip, whichfork,
bmap->me_startoff, bmap->me_startblock, &count, state);
if (error)
goto err_inode;
@ -481,23 +486,25 @@ xfs_bui_recover(
irec.br_blockcount = count;
irec.br_startoff = bmap->me_startoff;
irec.br_state = state;
error = xfs_bmap_unmap_extent(tp->t_mountp, dfops, ip, &irec);
error = xfs_bmap_unmap_extent(tp, ip, &irec);
if (error)
goto err_inode;
}
set_bit(XFS_BUI_RECOVERED, &buip->bui_flags);
xfs_defer_move(parent_tp, tp);
error = xfs_trans_commit(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
IRELE(ip);
xfs_irele(ip);
return error;
err_inode:
xfs_defer_move(parent_tp, tp);
xfs_trans_cancel(tp);
if (ip) {
xfs_iunlock(ip, XFS_ILOCK_EXCL);
IRELE(ip);
xfs_irele(ip);
}
return error;
}
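[Editor's note: the handoff described by the comment in xfs_bui_recover()
reduces to a symmetric pair of moves around the recovery work. Schematically,
with error paths elided:]

	xfs_defer_move(tp, parent_tp);	/* pull parent's deferred ops into tp */
	/* ... replay the intent, queueing new deferred work against tp ... */
	xfs_defer_move(parent_tp, tp);	/* hand everything back to the parent */
	error = xfs_trans_commit(tp);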


@ -79,7 +79,6 @@ struct xfs_bud_log_item *xfs_bud_init(struct xfs_mount *,
struct xfs_bui_log_item *);
void xfs_bui_item_free(struct xfs_bui_log_item *);
void xfs_bui_release(struct xfs_bui_log_item *);
int xfs_bui_recover(struct xfs_mount *mp, struct xfs_bui_log_item *buip,
struct xfs_defer_ops *dfops);
int xfs_bui_recover(struct xfs_trans *parent_tp, struct xfs_bui_log_item *buip);
#endif /* __XFS_BMAP_ITEM_H__ */


@ -702,16 +702,15 @@ xfs_bmap_punch_delalloc_range(
struct xfs_iext_cursor icur;
int error = 0;
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
xfs_ilock(ip, XFS_ILOCK_EXCL);
if (!(ifp->if_flags & XFS_IFEXTENTS)) {
error = xfs_iread_extents(NULL, ip, XFS_DATA_FORK);
if (error)
return error;
goto out_unlock;
}
if (!xfs_iext_lookup_extent_before(ip, ifp, &end_fsb, &icur, &got))
return 0;
goto out_unlock;
while (got.br_startoff + got.br_blockcount > start_fsb) {
del = got;
@ -735,6 +734,8 @@ xfs_bmap_punch_delalloc_range(
break;
}
out_unlock:
xfs_iunlock(ip, XFS_ILOCK_EXCL);
return error;
}
@ -872,13 +873,11 @@ xfs_alloc_file_space(
xfs_filblks_t allocatesize_fsb;
xfs_extlen_t extsz, temp;
xfs_fileoff_t startoffset_fsb;
xfs_fsblock_t firstfsb;
int nimaps;
int quota_flag;
int rt;
xfs_trans_t *tp;
xfs_bmbt_irec_t imaps[1], *imapp;
struct xfs_defer_ops dfops;
uint qblocks, resblks, resrtextents;
int error;
@ -971,20 +970,15 @@ xfs_alloc_file_space(
xfs_trans_ijoin(tp, ip, 0);
xfs_defer_init(&dfops, &firstfsb);
error = xfs_bmapi_write(tp, ip, startoffset_fsb,
allocatesize_fsb, alloc_type, &firstfsb,
resblks, imapp, &nimaps, &dfops);
allocatesize_fsb, alloc_type, resblks,
imapp, &nimaps);
if (error)
goto error0;
/*
* Complete the transaction
*/
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto error0;
error = xfs_trans_commit(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
if (error)
@ -1003,8 +997,7 @@ xfs_alloc_file_space(
return error;
error0: /* Cancel bmap, unlock inode, unreserve quota blocks, cancel trans */
xfs_defer_cancel(&dfops);
error0: /* unlock inode, unreserve quota blocks, cancel trans */
xfs_trans_unreserve_quota_nblks(tp, ip, (long)qblocks, 0, quota_flag);
error1: /* Just cancel transaction */
@ -1022,8 +1015,6 @@ xfs_unmap_extent(
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
struct xfs_defer_ops dfops;
xfs_fsblock_t firstfsb;
uint resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0);
int error;
@ -1041,24 +1032,15 @@ xfs_unmap_extent(
xfs_trans_ijoin(tp, ip, 0);
xfs_defer_init(&dfops, &firstfsb);
error = xfs_bunmapi(tp, ip, startoffset_fsb, len_fsb, 0, 2, &firstfsb,
&dfops, done);
error = xfs_bunmapi(tp, ip, startoffset_fsb, len_fsb, 0, 2, done);
if (error)
goto out_bmap_cancel;
xfs_defer_ijoin(&dfops, ip);
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
error = xfs_trans_commit(tp);
out_unlock:
xfs_iunlock(ip, XFS_ILOCK_EXCL);
return error;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
goto out_unlock;
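[Editor's note: every caller-side hunk in this file repeats the same
simplification: the transaction now carries the dfops, so the explicit
init/finish pair disappears and any queued deferred work is finished from the
transaction itself. A before/after sketch with arguments abbreviated:]

	/* before: caller-managed deferred ops */
	xfs_defer_init(&dfops, &firstfsb);
	error = xfs_bmapi_write(tp, ip, off, len, flags, &firstfsb,
				resblks, imapp, &nimaps, &dfops);
	if (!error)
		error = xfs_defer_finish(&tp, &dfops);

	/* after: deferred ops live in the transaction */
	error = xfs_bmapi_write(tp, ip, off, len, flags, resblks,
				imapp, &nimaps);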
@ -1279,7 +1261,7 @@ xfs_prepare_shift(
* we've flushed all the dirty data out to disk to avoid having
* CoW extents at the wrong offsets.
*/
if (xfs_is_reflink_inode(ip)) {
if (xfs_inode_has_cow_data(ip)) {
error = xfs_reflink_cancel_cow_range(ip, offset, NULLFILEOFF,
true);
if (error)
@ -1310,8 +1292,6 @@ xfs_collapse_file_space(
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
int error;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
xfs_fileoff_t next_fsb = XFS_B_TO_FSB(mp, offset + len);
xfs_fileoff_t shift_fsb = XFS_B_TO_FSB(mp, len);
uint resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0);
@ -1344,22 +1324,16 @@ xfs_collapse_file_space(
goto out_trans_cancel;
xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
xfs_defer_init(&dfops, &first_block);
error = xfs_bmap_collapse_extents(tp, ip, &next_fsb, shift_fsb,
&done, &first_block, &dfops);
&done);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
error = xfs_trans_commit(tp);
}
return error;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
return error;
@ -1386,8 +1360,6 @@ xfs_insert_file_space(
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
int error;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
xfs_fileoff_t stop_fsb = XFS_B_TO_FSB(mp, offset);
xfs_fileoff_t next_fsb = NULLFSBLOCK;
xfs_fileoff_t shift_fsb = XFS_B_TO_FSB(mp, len);
@ -1423,22 +1395,17 @@ xfs_insert_file_space(
xfs_ilock(ip, XFS_ILOCK_EXCL);
xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
xfs_defer_init(&dfops, &first_block);
error = xfs_bmap_insert_extents(tp, ip, &next_fsb, shift_fsb,
&done, stop_fsb, &first_block, &dfops);
&done, stop_fsb);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
error = xfs_trans_commit(tp);
}
return error;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
return error;
}
@ -1566,14 +1533,13 @@ xfs_swap_extent_rmap(
struct xfs_inode *ip,
struct xfs_inode *tip)
{
struct xfs_trans *tp = *tpp;
struct xfs_bmbt_irec irec;
struct xfs_bmbt_irec uirec;
struct xfs_bmbt_irec tirec;
xfs_fileoff_t offset_fsb;
xfs_fileoff_t end_fsb;
xfs_filblks_t count_fsb;
xfs_fsblock_t firstfsb;
struct xfs_defer_ops dfops;
int error;
xfs_filblks_t ilen;
xfs_filblks_t rlen;
@ -1609,7 +1575,7 @@ xfs_swap_extent_rmap(
/* Unmap the old blocks in the source file. */
while (tirec.br_blockcount) {
xfs_defer_init(&dfops, &firstfsb);
ASSERT(tp->t_firstblock == NULLFSBLOCK);
trace_xfs_swap_extent_rmap_remap_piece(tip, &tirec);
/* Read extent from the source file */
@ -1631,33 +1597,29 @@ xfs_swap_extent_rmap(
trace_xfs_swap_extent_rmap_remap_piece(tip, &uirec);
/* Remove the mapping from the donor file. */
error = xfs_bmap_unmap_extent((*tpp)->t_mountp, &dfops,
tip, &uirec);
error = xfs_bmap_unmap_extent(tp, tip, &uirec);
if (error)
goto out_defer;
/* Remove the mapping from the source file. */
error = xfs_bmap_unmap_extent((*tpp)->t_mountp, &dfops,
ip, &irec);
error = xfs_bmap_unmap_extent(tp, ip, &irec);
if (error)
goto out_defer;
/* Map the donor file's blocks into the source file. */
error = xfs_bmap_map_extent((*tpp)->t_mountp, &dfops,
ip, &uirec);
error = xfs_bmap_map_extent(tp, ip, &uirec);
if (error)
goto out_defer;
/* Map the source file's blocks into the donor file. */
error = xfs_bmap_map_extent((*tpp)->t_mountp, &dfops,
tip, &irec);
error = xfs_bmap_map_extent(tp, tip, &irec);
if (error)
goto out_defer;
xfs_defer_ijoin(&dfops, ip);
error = xfs_defer_finish(tpp, &dfops);
error = xfs_defer_finish(tpp);
tp = *tpp;
if (error)
goto out_defer;
goto out;
tirec.br_startoff += rlen;
if (tirec.br_startblock != HOLESTARTBLOCK &&
@ -1675,7 +1637,7 @@ xfs_swap_extent_rmap(
return 0;
out_defer:
xfs_defer_cancel(&dfops);
xfs_defer_cancel(tp);
out:
trace_xfs_swap_extent_rmap_error(ip, error, _RET_IP_);
tip->i_d.di_flags2 = tip_flags2;
@ -1691,7 +1653,6 @@ xfs_swap_extent_forks(
int *src_log_flags,
int *target_log_flags)
{
struct xfs_ifork tempifp, *ifp, *tifp;
xfs_filblks_t aforkblks = 0;
xfs_filblks_t taforkblks = 0;
xfs_extnum_t junk;
@ -1733,11 +1694,7 @@ xfs_swap_extent_forks(
/*
* Swap the data forks of the inodes
*/
ifp = &ip->i_df;
tifp = &tip->i_df;
tempifp = *ifp; /* struct copy */
*ifp = *tifp; /* struct copy */
*tifp = tempifp; /* struct copy */
swap(ip->i_df, tip->i_df);
/*
* Fix the on-disk inode values
@ -1746,13 +1703,8 @@ xfs_swap_extent_forks(
ip->i_d.di_nblocks = tip->i_d.di_nblocks - taforkblks + aforkblks;
tip->i_d.di_nblocks = tmp + taforkblks - aforkblks;
tmp = (uint64_t) ip->i_d.di_nextents;
ip->i_d.di_nextents = tip->i_d.di_nextents;
tip->i_d.di_nextents = tmp;
tmp = (uint64_t) ip->i_d.di_format;
ip->i_d.di_format = tip->i_d.di_format;
tip->i_d.di_format = tmp;
swap(ip->i_d.di_nextents, tip->i_d.di_nextents);
swap(ip->i_d.di_format, tip->i_d.di_format);
/*
* The extents in the source inode could still contain speculative
@ -1846,7 +1798,6 @@ xfs_swap_extents(
int src_log_flags, target_log_flags;
int error = 0;
int lock_flags;
struct xfs_ifork *cowfp;
uint64_t f;
int resblks = 0;
@ -1987,18 +1938,11 @@ xfs_swap_extents(
/* Swap the cow forks. */
if (xfs_sb_version_hasreflink(&mp->m_sb)) {
xfs_extnum_t extnum;
ASSERT(ip->i_cformat == XFS_DINODE_FMT_EXTENTS);
ASSERT(tip->i_cformat == XFS_DINODE_FMT_EXTENTS);
extnum = ip->i_cnextents;
ip->i_cnextents = tip->i_cnextents;
tip->i_cnextents = extnum;
cowfp = ip->i_cowfp;
ip->i_cowfp = tip->i_cowfp;
tip->i_cowfp = cowfp;
swap(ip->i_cnextents, tip->i_cnextents);
swap(ip->i_cowfp, tip->i_cowfp);
if (ip->i_cowfp && ip->i_cowfp->if_bytes)
xfs_inode_set_cowblocks_tag(ip);
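
The three-line open-coded exchanges above become calls to the kernel's generic swap() helper, which (roughly, as defined in include/linux/kernel.h in this era) expands to:

	#define swap(a, b) \
		do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)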


@ -34,16 +34,6 @@
static kmem_zone_t *xfs_buf_zone;
#ifdef XFS_BUF_LOCK_TRACKING
# define XB_SET_OWNER(bp) ((bp)->b_last_holder = current->pid)
# define XB_CLEAR_OWNER(bp) ((bp)->b_last_holder = -1)
# define XB_GET_OWNER(bp) ((bp)->b_last_holder)
#else
# define XB_SET_OWNER(bp) do { } while (0)
# define XB_CLEAR_OWNER(bp) do { } while (0)
# define XB_GET_OWNER(bp) do { } while (0)
#endif
#define xb_to_gfp(flags) \
((((flags) & XBF_READ_AHEAD) ? __GFP_NORETRY : GFP_NOFS) | __GFP_NOWARN)
@ -226,7 +216,6 @@ _xfs_buf_alloc(
INIT_LIST_HEAD(&bp->b_li_list);
sema_init(&bp->b_sema, 0); /* held, no waiters */
spin_lock_init(&bp->b_lock);
XB_SET_OWNER(bp);
bp->b_target = target;
bp->b_flags = flags;
@ -757,11 +746,7 @@ _xfs_buf_read(
bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD);
bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD);
if (flags & XBF_ASYNC) {
xfs_buf_submit(bp);
return 0;
}
return xfs_buf_submit_wait(bp);
return xfs_buf_submit(bp);
}
xfs_buf_t *
@ -846,7 +831,7 @@ xfs_buf_read_uncached(
bp->b_flags |= XBF_READ;
bp->b_ops = ops;
xfs_buf_submit_wait(bp);
xfs_buf_submit(bp);
if (bp->b_error) {
int error = bp->b_error;
xfs_buf_relse(bp);
@ -1095,12 +1080,10 @@ xfs_buf_trylock(
int locked;
locked = down_trylock(&bp->b_sema) == 0;
if (locked) {
XB_SET_OWNER(bp);
if (locked)
trace_xfs_buf_trylock(bp, _RET_IP_);
} else {
else
trace_xfs_buf_trylock_fail(bp, _RET_IP_);
}
return locked;
}
@ -1122,7 +1105,6 @@ xfs_buf_lock(
if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
xfs_log_force(bp->b_target->bt_mount, 0);
down(&bp->b_sema);
XB_SET_OWNER(bp);
trace_xfs_buf_lock_done(bp, _RET_IP_);
}
@ -1133,9 +1115,7 @@ xfs_buf_unlock(
{
ASSERT(xfs_buf_islocked(bp));
XB_CLEAR_OWNER(bp);
up(&bp->b_sema);
trace_xfs_buf_unlock(bp, _RET_IP_);
}
@ -1249,7 +1229,7 @@ xfs_bwrite(
bp->b_flags &= ~(XBF_ASYNC | XBF_READ | _XBF_DELWRI_Q |
XBF_WRITE_FAIL | XBF_DONE);
error = xfs_buf_submit_wait(bp);
error = xfs_buf_submit(bp);
if (error) {
xfs_force_shutdown(bp->b_target->bt_mount,
SHUTDOWN_META_IO_ERROR);
@ -1453,52 +1433,69 @@ _xfs_buf_ioapply(
}
/*
* Asynchronous IO submission path. This transfers the buffer lock ownership and
* the current reference to the IO. It is not safe to reference the buffer after
* a call to this function unless the caller holds an additional reference
* itself.
* Wait for I/O completion of a sync buffer and return the I/O error code.
*/
void
xfs_buf_submit(
static int
xfs_buf_iowait(
struct xfs_buf *bp)
{
ASSERT(!(bp->b_flags & XBF_ASYNC));
trace_xfs_buf_iowait(bp, _RET_IP_);
wait_for_completion(&bp->b_iowait);
trace_xfs_buf_iowait_done(bp, _RET_IP_);
return bp->b_error;
}
/*
* Buffer I/O submission path, read or write. Asynchronous submission transfers
* the buffer lock ownership and the current reference to the IO. It is not
* safe to reference the buffer after a call to this function unless the caller
* holds an additional reference itself.
*/
int
__xfs_buf_submit(
struct xfs_buf *bp,
bool wait)
{
int error = 0;
trace_xfs_buf_submit(bp, _RET_IP_);
ASSERT(!(bp->b_flags & _XBF_DELWRI_Q));
ASSERT(bp->b_flags & XBF_ASYNC);
/* on shutdown we stale and complete the buffer immediately */
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
xfs_buf_ioerror(bp, -EIO);
bp->b_flags &= ~XBF_DONE;
xfs_buf_stale(bp);
xfs_buf_ioend(bp);
return;
if (bp->b_flags & XBF_ASYNC)
xfs_buf_ioend(bp);
return -EIO;
}
/*
* Grab a reference so the buffer does not go away underneath us. For
* async buffers, I/O completion drops the caller's reference, which
* could occur before submission returns.
*/
xfs_buf_hold(bp);
if (bp->b_flags & XBF_WRITE)
xfs_buf_wait_unpin(bp);
/* clear the internal error state to avoid spurious errors */
bp->b_io_error = 0;
/*
* The caller's reference is released during I/O completion.
* This occurs some time after the last b_io_remaining reference is
* released, so after we drop our Io reference we have to have some
* other reference to ensure the buffer doesn't go away from underneath
* us. Take a direct reference to ensure we have safe access to the
* buffer until we are finished with it.
*/
xfs_buf_hold(bp);
/*
* Set the count to 1 initially, this will stop an I/O completion
* callout which happens before we have started all the I/O from calling
* xfs_buf_ioend too early.
*/
atomic_set(&bp->b_io_remaining, 1);
xfs_buf_ioacct_inc(bp);
if (bp->b_flags & XBF_ASYNC)
xfs_buf_ioacct_inc(bp);
_xfs_buf_ioapply(bp);
/*
@ -1507,74 +1504,19 @@ xfs_buf_submit(
* that we don't return to the caller with completion still pending.
*/
if (atomic_dec_and_test(&bp->b_io_remaining) == 1) {
if (bp->b_error)
if (bp->b_error || !(bp->b_flags & XBF_ASYNC))
xfs_buf_ioend(bp);
else
xfs_buf_ioend_async(bp);
}
xfs_buf_rele(bp);
/* Note: it is not safe to reference bp now we've dropped our ref */
}
/*
* Synchronous buffer IO submission path, read or write.
*/
int
xfs_buf_submit_wait(
struct xfs_buf *bp)
{
int error;
trace_xfs_buf_submit_wait(bp, _RET_IP_);
ASSERT(!(bp->b_flags & (_XBF_DELWRI_Q | XBF_ASYNC)));
if (XFS_FORCED_SHUTDOWN(bp->b_target->bt_mount)) {
xfs_buf_ioerror(bp, -EIO);
xfs_buf_stale(bp);
bp->b_flags &= ~XBF_DONE;
return -EIO;
}
if (bp->b_flags & XBF_WRITE)
xfs_buf_wait_unpin(bp);
/* clear the internal error state to avoid spurious errors */
bp->b_io_error = 0;
if (wait)
error = xfs_buf_iowait(bp);
/*
* For synchronous IO, the IO does not inherit the submitters reference
* count, nor the buffer lock. Hence we cannot release the reference we
* are about to take until we've waited for all IO completion to occur,
* including any xfs_buf_ioend_async() work that may be pending.
*/
xfs_buf_hold(bp);
/*
* Set the count to 1 initially, this will stop an I/O completion
* callout which happens before we have started all the I/O from calling
* xfs_buf_ioend too early.
*/
atomic_set(&bp->b_io_remaining, 1);
_xfs_buf_ioapply(bp);
/*
* make sure we run completion synchronously if it raced with us and is
* already complete.
*/
if (atomic_dec_and_test(&bp->b_io_remaining) == 1)
xfs_buf_ioend(bp);
/* wait for completion before gathering the error from the buffer */
trace_xfs_buf_iowait(bp, _RET_IP_);
wait_for_completion(&bp->b_iowait);
trace_xfs_buf_iowait_done(bp, _RET_IP_);
error = bp->b_error;
/*
* all done now, we can release the hold that keeps the buffer
* referenced for the entire IO.
* Release the hold that keeps the buffer referenced for the entire
* I/O. Note that if the buffer is async, it is not safe to reference
* after this release.
*/
xfs_buf_rele(bp);
return error;
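
With the old sync/async entry points folded into __xfs_buf_submit(), a caller that wants sync semantics without blocking inside submission can start the I/O and reap the completion later. A minimal sketch of the split pattern (the delwri code below relies on exactly this):

	bp->b_flags &= ~XBF_ASYNC;	/* sync I/O: completion stays ours */
	__xfs_buf_submit(bp, false);	/* start the I/O, do not block */
	/* ... submit more buffers, finish the plug ... */
	error = xfs_buf_iowait(bp);	/* pick up completion and error */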
@ -1972,16 +1914,11 @@ xfs_buf_cmp(
}
/*
* submit buffers for write.
*
* When we have a large buffer list, we do not want to hold all the buffers
* locked while we block on the request queue waiting for IO dispatch. To avoid
* this problem, we lock and submit buffers in groups of 50, thereby minimising
* the lock hold times for lists which may contain thousands of objects.
*
* To do this, we sort the buffer list before we walk the list to lock and
* submit buffers, and we plug and unplug around each group of buffers we
* submit.
* Submit buffers for write. If wait_list is specified, the buffers are
* submitted using sync I/O and placed on the wait list such that the caller can
* iowait each buffer. Otherwise async I/O is used and the buffers are released
* at I/O completion time. In either case, buffers remain locked until I/O
* completes and the buffer is released from the queue.
*/
static int
xfs_buf_delwri_submit_buffers(
@ -2023,21 +1960,21 @@ xfs_buf_delwri_submit_buffers(
trace_xfs_buf_delwri_split(bp, _RET_IP_);
/*
* We do all IO submission async. This means if we need
* to wait for IO completion we need to take an extra
* reference so the buffer is still valid on the other
* side. We need to move the buffer onto the io_list
* at this point so the caller can still access it.
* If we have a wait list, each buffer (and associated delwri
* queue reference) transfers to it and is submitted
* synchronously. Otherwise, drop the buffer from the delwri
* queue and submit async.
*/
bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_WRITE_FAIL);
bp->b_flags |= XBF_WRITE | XBF_ASYNC;
bp->b_flags |= XBF_WRITE;
if (wait_list) {
xfs_buf_hold(bp);
bp->b_flags &= ~XBF_ASYNC;
list_move_tail(&bp->b_list, wait_list);
} else
} else {
bp->b_flags |= XBF_ASYNC;
list_del_init(&bp->b_list);
xfs_buf_submit(bp);
}
__xfs_buf_submit(bp, false);
}
blk_finish_plug(&plug);
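
The two public entry points map onto the two modes described in the comment above; assuming the long-standing delwri API, usage looks like:

	LIST_HEAD(buffer_list);

	/* queue dirty buffers */
	xfs_buf_delwri_queue(bp, &buffer_list);

	/* flush and wait: sync submission, then iowait each buffer */
	error = xfs_buf_delwri_submit(&buffer_list);

	/* or fire and forget: async submission, buffers self-release */
	xfs_buf_delwri_submit_nowait(&buffer_list);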
@ -2084,9 +2021,11 @@ xfs_buf_delwri_submit(
list_del_init(&bp->b_list);
/* locking the buffer will wait for async IO completion. */
xfs_buf_lock(bp);
error2 = bp->b_error;
/*
* Wait on the locked buffer, check for errors and unlock and
* release the delwri queue reference.
*/
error2 = xfs_buf_iowait(bp);
xfs_buf_relse(bp);
if (!error)
error = error2;
@ -2132,23 +2071,18 @@ xfs_buf_delwri_pushbuf(
/*
* Delwri submission clears the DELWRI_Q buffer flag and returns with
* the buffer on the wait list with an associated reference. Rather than
* the buffer on the wait list with the original reference. Rather than
* bounce the buffer from a local wait list back to the original list
* after I/O completion, reuse the original list as the wait list.
*/
xfs_buf_delwri_submit_buffers(&submit_list, buffer_list);
/*
* The buffer is now under I/O and wait listed as during typical delwri
* submission. Lock the buffer to wait for I/O completion. Rather than
* remove the buffer from the wait list and release the reference, we
* want to return with the buffer queued to the original list. The
* buffer already sits on the original list with a wait list reference,
* however. If we let the queue inherit that wait list reference, all we
* need to do is reset the DELWRI_Q flag.
* The buffer is now locked, under I/O and wait listed on the original
* delwri queue. Wait for I/O completion, restore the DELWRI_Q flag and
* return with the buffer unlocked and on the original queue.
*/
xfs_buf_lock(bp);
error = bp->b_error;
error = xfs_buf_iowait(bp);
bp->b_flags |= _XBF_DELWRI_Q;
xfs_buf_unlock(bp);


@ -12,7 +12,6 @@
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/dax.h>
#include <linux/buffer_head.h>
#include <linux/uio.h>
#include <linux/list_lru.h>
@ -199,10 +198,6 @@ typedef struct xfs_buf {
int b_last_error;
const struct xfs_buf_ops *b_ops;
#ifdef XFS_BUF_LOCK_TRACKING
int b_last_holder;
#endif
} xfs_buf_t;
/* Finding and Reading Buffers */
@ -298,8 +293,14 @@ extern void __xfs_buf_ioerror(struct xfs_buf *bp, int error,
xfs_failaddr_t failaddr);
#define xfs_buf_ioerror(bp, err) __xfs_buf_ioerror((bp), (err), __this_address)
extern void xfs_buf_ioerror_alert(struct xfs_buf *, const char *func);
extern void xfs_buf_submit(struct xfs_buf *bp);
extern int xfs_buf_submit_wait(struct xfs_buf *bp);
extern int __xfs_buf_submit(struct xfs_buf *bp, bool);
static inline int xfs_buf_submit(struct xfs_buf *bp)
{
bool wait = bp->b_flags & XBF_ASYNC ? false : true;
return __xfs_buf_submit(bp, wait);
}
extern void xfs_buf_iomove(xfs_buf_t *, size_t, size_t, void *,
xfs_buf_rw_t);
#define xfs_buf_zero(bp, off, len) \


@ -128,7 +128,7 @@ next_extent:
}
out_del_cursor:
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
xfs_buf_relse(agbp);
out_put_perag:
xfs_perag_put(pag);


@ -286,17 +286,15 @@ xfs_dquot_disk_alloc(
struct xfs_buf **bpp)
{
struct xfs_bmbt_irec map;
struct xfs_defer_ops dfops;
struct xfs_mount *mp = (*tpp)->t_mountp;
struct xfs_trans *tp = *tpp;
struct xfs_mount *mp = tp->t_mountp;
struct xfs_buf *bp;
struct xfs_inode *quotip = xfs_quota_inode(mp, dqp->dq_flags);
xfs_fsblock_t firstblock;
int nmaps = 1;
int error;
trace_xfs_dqalloc(dqp);
xfs_defer_init(&dfops, &firstblock);
xfs_ilock(quotip, XFS_ILOCK_EXCL);
if (!xfs_this_quota_on(dqp->q_mount, dqp->dq_flags)) {
/*
@ -308,13 +306,12 @@ xfs_dquot_disk_alloc(
}
/* Create the block mapping. */
xfs_trans_ijoin(*tpp, quotip, XFS_ILOCK_EXCL);
error = xfs_bmapi_write(*tpp, quotip, dqp->q_fileoffset,
xfs_trans_ijoin(tp, quotip, XFS_ILOCK_EXCL);
error = xfs_bmapi_write(tp, quotip, dqp->q_fileoffset,
XFS_DQUOT_CLUSTER_SIZE_FSB, XFS_BMAPI_METADATA,
&firstblock, XFS_QM_DQALLOC_SPACE_RES(mp),
&map, &nmaps, &dfops);
XFS_QM_DQALLOC_SPACE_RES(mp), &map, &nmaps);
if (error)
goto error0;
return error;
ASSERT(map.br_blockcount == XFS_DQUOT_CLUSTER_SIZE_FSB);
ASSERT(nmaps == 1);
ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
@ -326,19 +323,17 @@ xfs_dquot_disk_alloc(
dqp->q_blkno = XFS_FSB_TO_DADDR(mp, map.br_startblock);
/* now we can just get the buffer (there's nothing to read yet) */
bp = xfs_trans_get_buf(*tpp, mp->m_ddev_targp, dqp->q_blkno,
bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, dqp->q_blkno,
mp->m_quotainfo->qi_dqchunklen, 0);
if (!bp) {
error = -ENOMEM;
goto error1;
}
if (!bp)
return -ENOMEM;
bp->b_ops = &xfs_dquot_buf_ops;
/*
* Make a chunk of dquots out of this buffer and log
* the entire thing.
*/
xfs_qm_init_dquot_blk(*tpp, mp, be32_to_cpu(dqp->q_core.d_id),
xfs_qm_init_dquot_blk(tp, mp, be32_to_cpu(dqp->q_core.d_id),
dqp->dq_flags & XFS_DQ_ALLTYPES, bp);
xfs_buf_set_ref(bp, XFS_DQUOT_REF);
@ -352,10 +347,8 @@ xfs_dquot_disk_alloc(
* the buffer locked across the _defer_finish call. We can now do
* this correctly with xfs_defer_bjoin.
*
* Above, we allocated a disk block for the dquot information and
* used get_buf to initialize the dquot. If the _defer_bjoin fails,
* the buffer is still locked to *tpp, so we must _bhold_release and
* then _trans_brelse the buffer. If the _defer_finish fails, the old
* Above, we allocated a disk block for the dquot information and used
* get_buf to initialize the dquot. If the _defer_finish fails, the old
* transaction is gone but the new buffer is not joined or held to any
* transaction, so we must _buf_relse it.
*
@ -364,25 +357,15 @@ xfs_dquot_disk_alloc(
* is responsible for unlocking any buffer passed back, either
* manually or by committing the transaction.
*/
xfs_trans_bhold(*tpp, bp);
error = xfs_defer_bjoin(&dfops, bp);
if (error) {
xfs_trans_bhold_release(*tpp, bp);
xfs_trans_brelse(*tpp, bp);
goto error1;
}
error = xfs_defer_finish(tpp, &dfops);
xfs_trans_bhold(tp, bp);
error = xfs_defer_finish(tpp);
tp = *tpp;
if (error) {
xfs_buf_relse(bp);
goto error1;
return error;
}
*bpp = bp;
return 0;
error1:
xfs_defer_cancel(&dfops);
error0:
return error;
}
/*


@ -50,6 +50,7 @@ static unsigned int xfs_errortag_random_default[] = {
XFS_RANDOM_LOG_ITEM_PIN,
XFS_RANDOM_BUF_LRU_REF,
XFS_RANDOM_FORCE_SCRUB_REPAIR,
XFS_RANDOM_FORCE_SUMMARY_RECALC,
};
struct xfs_errortag_attr {
@ -157,6 +158,7 @@ XFS_ERRORTAG_ATTR_RW(log_bad_crc, XFS_ERRTAG_LOG_BAD_CRC);
XFS_ERRORTAG_ATTR_RW(log_item_pin, XFS_ERRTAG_LOG_ITEM_PIN);
XFS_ERRORTAG_ATTR_RW(buf_lru_ref, XFS_ERRTAG_BUF_LRU_REF);
XFS_ERRORTAG_ATTR_RW(force_repair, XFS_ERRTAG_FORCE_SCRUB_REPAIR);
XFS_ERRORTAG_ATTR_RW(bad_summary, XFS_ERRTAG_FORCE_SUMMARY_RECALC);
static struct attribute *xfs_errortag_attrs[] = {
XFS_ERRORTAG_ATTR_LIST(noerror),
@ -192,6 +194,7 @@ static struct attribute *xfs_errortag_attrs[] = {
XFS_ERRORTAG_ATTR_LIST(log_item_pin),
XFS_ERRORTAG_ATTR_LIST(buf_lru_ref),
XFS_ERRORTAG_ATTR_LIST(force_repair),
XFS_ERRORTAG_ATTR_LIST(bad_summary),
NULL,
};
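
Like the other tags, the new one is armed through the per-mount errortag sysfs directory and consumed in the kernel with XFS_TEST_ERROR(). A sketch of both ends (the device path is illustrative):

	/* userspace: echo 1 > /sys/fs/xfs/<dev>/errortag/bad_summary */

	/* kernel: fires at the tag's configured frequency, or always
	 * once the knob above is set */
	if (XFS_TEST_ERROR(false, mp, XFS_ERRTAG_FORCE_SUMMARY_RECALC))
		xfs_force_summary_recalc(mp);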


@ -150,7 +150,7 @@ xfs_nfs_get_inode(
}
if (VFS_I(ip)->i_generation != generation) {
IRELE(ip);
xfs_irele(ip);
return ERR_PTR(-ESTALE);
}


@ -721,12 +721,10 @@ xfs_file_write_iter(
static void
xfs_wait_dax_page(
struct inode *inode,
bool *did_unlock)
struct inode *inode)
{
struct xfs_inode *ip = XFS_I(inode);
*did_unlock = true;
xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
schedule();
xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
@ -735,8 +733,7 @@ xfs_wait_dax_page(
static int
xfs_break_dax_layouts(
struct inode *inode,
uint iolock,
bool *did_unlock)
bool *retry)
{
struct page *page;
@ -746,9 +743,10 @@ xfs_break_dax_layouts(
if (!page)
return 0;
*retry = true;
return ___wait_var_event(&page->_refcount,
atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
0, 0, xfs_wait_dax_page(inode, did_unlock));
0, 0, xfs_wait_dax_page(inode));
}
int
@ -766,7 +764,7 @@ xfs_break_layouts(
retry = false;
switch (reason) {
case BREAK_UNMAP:
error = xfs_break_dax_layouts(inode, *iolock, &retry);
error = xfs_break_dax_layouts(inode, &retry);
if (error || retry)
break;
/* fall through */
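
The did_unlock plumbing can go away because xfs_wait_dax_page() now unconditionally cycles MMAPLOCK; the callee just sets *retry and the caller re-runs the whole break sequence. The enclosing loop in xfs_break_layouts(), outside this hunk, is essentially:

	do {
		retry = false;
		/* ... the switch shown above ... */
	} while (error == 0 && retry);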


@ -19,6 +19,8 @@
#include "xfs_filestream.h"
#include "xfs_trace.h"
#include "xfs_ag_resv.h"
#include "xfs_trans.h"
#include "xfs_shared.h"
struct xfs_fstrm_item {
struct xfs_mru_cache_elem mru;
@ -339,7 +341,7 @@ xfs_filestream_lookup_ag(
if (xfs_filestream_pick_ag(pip, startag, &ag, 0, 0))
ag = NULLAGNUMBER;
out:
IRELE(pip);
xfs_irele(pip);
return ag;
}
@ -377,7 +379,7 @@ xfs_filestream_new_ag(
if (xfs_alloc_is_userdata(ap->datatype))
flags |= XFS_PICK_USERDATA;
if (ap->dfops->dop_low)
if (ap->tp->t_flags & XFS_TRANS_LOWMODE)
flags |= XFS_PICK_LOWSPACE;
err = xfs_filestream_pick_ag(pip, startag, agp, flags, minlen);
@ -388,7 +390,7 @@ xfs_filestream_new_ag(
if (mru)
xfs_fstrm_free_func(mp, mru);
IRELE(pip);
xfs_irele(pip);
exit:
if (*agp == NULLAGNUMBER)
*agp = 0;


@ -214,12 +214,12 @@ xfs_getfsmap_is_shared(
/* Are there any shared blocks here? */
flen = 0;
cur = xfs_refcountbt_init_cursor(mp, tp, info->agf_bp,
info->agno, NULL);
info->agno);
error = xfs_refcount_find_shared(cur, rec->rm_startblock,
rec->rm_blockcount, &fbno, &flen, false);
xfs_btree_del_cursor(cur, error ? XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
if (error)
return error;


@ -536,7 +536,7 @@ xfs_fs_reserve_ag_blocks(
for (agno = 0; agno < mp->m_sb.sb_agcount; agno++) {
pag = xfs_perag_get(mp, agno);
err2 = xfs_ag_resv_init(pag);
err2 = xfs_ag_resv_init(pag, NULL);
xfs_perag_put(pag);
if (err2 && !error)
error = err2;


@ -66,7 +66,7 @@ xfs_inode_alloc(
ip->i_cowfp = NULL;
ip->i_cnextents = 0;
ip->i_cformat = XFS_DINODE_FMT_EXTENTS;
memset(&ip->i_df, 0, sizeof(xfs_ifork_t));
memset(&ip->i_df, 0, sizeof(ip->i_df));
ip->i_flags = 0;
ip->i_delayed_blks = 0;
memset(&ip->i_d, 0, sizeof(ip->i_d));
@ -716,7 +716,7 @@ xfs_icache_inode_is_allocated(
return error;
*inuse = !!(VFS_I(ip)->i_mode);
IRELE(ip);
xfs_irele(ip);
return 0;
}
@ -856,7 +856,7 @@ restart:
xfs_iflags_test(batch[i], XFS_INEW))
xfs_inew_wait(batch[i]);
error = execute(batch[i], flags, args);
IRELE(batch[i]);
xfs_irele(batch[i]);
if (error == -EAGAIN) {
skipped++;
continue;
@ -1697,14 +1697,13 @@ xfs_inode_clear_eofblocks_tag(
*/
static bool
xfs_prep_free_cowblocks(
struct xfs_inode *ip,
struct xfs_ifork *ifp)
struct xfs_inode *ip)
{
/*
* Just clear the tag if we have an empty cow fork or none at all. It's
* possible the inode was fully unshared since it was originally tagged.
*/
if (!xfs_is_reflink_inode(ip) || !ifp->if_bytes) {
if (!xfs_inode_has_cow_data(ip)) {
trace_xfs_inode_free_cowblocks_invalid(ip);
xfs_inode_clear_cowblocks_tag(ip);
return false;
@ -1742,11 +1741,10 @@ xfs_inode_free_cowblocks(
void *args)
{
struct xfs_eofblocks *eofb = args;
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
int match;
int ret = 0;
if (!xfs_prep_free_cowblocks(ip, ifp))
if (!xfs_prep_free_cowblocks(ip))
return 0;
if (eofb) {
@ -1771,7 +1769,7 @@ xfs_inode_free_cowblocks(
* Check again, nobody else should be able to dirty blocks or change
* the reflink iflag now that we have the first two locks held.
*/
if (xfs_prep_free_cowblocks(ip, ifp))
if (xfs_prep_free_cowblocks(ip))
ret = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, false);
xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);


@ -927,7 +927,7 @@ xfs_ialloc(
case S_IFLNK:
ip->i_d.di_format = XFS_DINODE_FMT_EXTENTS;
ip->i_df.if_flags = XFS_IFEXTENTS;
ip->i_df.if_bytes = ip->i_df.if_real_bytes = 0;
ip->i_df.if_bytes = 0;
ip->i_df.if_u1.if_root = NULL;
break;
default:
@ -1142,8 +1142,6 @@ xfs_create(
struct xfs_inode *ip = NULL;
struct xfs_trans *tp = NULL;
int error;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
bool unlock_dp_on_error = false;
prid_t prid;
struct xfs_dquot *udqp = NULL;
@ -1195,9 +1193,6 @@ xfs_create(
xfs_ilock(dp, XFS_ILOCK_EXCL | XFS_ILOCK_PARENT);
unlock_dp_on_error = true;
xfs_defer_init(&dfops, &first_block);
tp->t_agfl_dfops = &dfops;
/*
* Reserve disk quota and the inode.
*/
@ -1226,7 +1221,7 @@ xfs_create(
unlock_dp_on_error = false;
error = xfs_dir_createname(tp, dp, name, ip->i_ino,
&first_block, &dfops, resblks ?
resblks ?
resblks - XFS_IALLOC_SPACE_RES(mp) : 0);
if (error) {
ASSERT(error != -ENOSPC);
@ -1238,11 +1233,11 @@ xfs_create(
if (is_dir) {
error = xfs_dir_init(tp, ip, dp);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
error = xfs_bumplink(tp, dp);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
}
/*
@ -1260,10 +1255,6 @@ xfs_create(
*/
xfs_qm_vop_create_dqattach(tp, ip, udqp, gdqp, pdqp);
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
error = xfs_trans_commit(tp);
if (error)
goto out_release_inode;
@ -1275,8 +1266,6 @@ xfs_create(
*ipp = ip;
return 0;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
out_release_inode:
@ -1287,7 +1276,7 @@ xfs_create(
*/
if (ip) {
xfs_finish_inode_setup(ip);
IRELE(ip);
xfs_irele(ip);
}
xfs_qm_dqrele(udqp);
@ -1382,7 +1371,7 @@ xfs_create_tmpfile(
*/
if (ip) {
xfs_finish_inode_setup(ip);
IRELE(ip);
xfs_irele(ip);
}
xfs_qm_dqrele(udqp);
@ -1401,8 +1390,6 @@ xfs_link(
xfs_mount_t *mp = tdp->i_mount;
xfs_trans_t *tp;
int error;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
int resblks;
trace_xfs_link(tdp, target_name);
@ -1451,9 +1438,6 @@ xfs_link(
goto error_return;
}
xfs_defer_init(&dfops, &first_block);
tp->t_agfl_dfops = &dfops;
/*
* Handle initial link state of O_TMPFILE inode
*/
@ -1464,7 +1448,7 @@ xfs_link(
}
error = xfs_dir_createname(tp, tdp, target_name, sip->i_ino,
&first_block, &dfops, resblks);
resblks);
if (error)
goto error_return;
xfs_trans_ichgtime(tp, tdp, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
@ -1482,12 +1466,6 @@ xfs_link(
if (mp->m_flags & (XFS_MOUNT_WSYNC|XFS_MOUNT_DIRSYNC))
xfs_trans_set_sync(tp);
error = xfs_defer_finish(&tp, &dfops);
if (error) {
xfs_defer_cancel(&dfops);
goto error_return;
}
return xfs_trans_commit(tp);
error_return:
@ -1545,8 +1523,6 @@ xfs_itruncate_extents_flags(
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp = *tpp;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
xfs_fileoff_t first_unmap_block;
xfs_fileoff_t last_block;
xfs_filblks_t unmap_len;
@ -1583,10 +1559,9 @@ xfs_itruncate_extents_flags(
ASSERT(first_unmap_block < last_block);
unmap_len = last_block - first_unmap_block + 1;
while (!done) {
xfs_defer_init(&dfops, &first_block);
ASSERT(tp->t_firstblock == NULLFSBLOCK);
error = xfs_bunmapi(tp, ip, first_unmap_block, unmap_len, flags,
XFS_ITRUNC_MAX_EXTENTS, &first_block,
&dfops, &done);
XFS_ITRUNC_MAX_EXTENTS, &done);
if (error)
goto out_bmap_cancel;
@ -1594,10 +1569,9 @@ xfs_itruncate_extents_flags(
* Duplicate the transaction that has the permanent
* reservation and commit the old transaction.
*/
xfs_defer_ijoin(&dfops, ip);
error = xfs_defer_finish(&tp, &dfops);
error = xfs_defer_finish(&tp);
if (error)
goto out_bmap_cancel;
goto out;
error = xfs_trans_roll_inode(&tp, ip);
if (error)
@ -1631,7 +1605,7 @@ out_bmap_cancel:
* the transaction can be properly aborted. We just need to make sure
* we're not holding any resources that we were not when we came in.
*/
xfs_defer_cancel(&dfops);
xfs_defer_cancel(tp);
goto out;
}
@ -1733,7 +1707,6 @@ xfs_inactive_truncate(
ASSERT(XFS_FORCED_SHUTDOWN(mp));
return error;
}
xfs_ilock(ip, XFS_ILOCK_EXCL);
xfs_trans_ijoin(tp, ip, 0);
@ -1774,8 +1747,6 @@ STATIC int
xfs_inactive_ifree(
struct xfs_inode *ip)
{
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
int error;
@ -1812,9 +1783,7 @@ xfs_inactive_ifree(
xfs_ilock(ip, XFS_ILOCK_EXCL);
xfs_trans_ijoin(tp, ip, 0);
xfs_defer_init(&dfops, &first_block);
tp->t_agfl_dfops = &dfops;
error = xfs_ifree(tp, ip, &dfops);
error = xfs_ifree(tp, ip);
if (error) {
/*
* If we fail to free the inode, shut down. The cancel
@ -1840,12 +1809,6 @@ xfs_inactive_ifree(
* Just ignore errors at this point. There is nothing we can do except
* to try to keep going. Make sure it's not a silent error.
*/
error = xfs_defer_finish(&tp, &dfops);
if (error) {
xfs_notice(mp, "%s: xfs_defer_finish returned error %d",
__func__, error);
xfs_defer_cancel(&dfops);
}
error = xfs_trans_commit(tp);
if (error)
xfs_notice(mp, "%s: xfs_trans_commit returned error %d",
@ -1868,7 +1831,6 @@ xfs_inactive(
xfs_inode_t *ip)
{
struct xfs_mount *mp;
struct xfs_ifork *cow_ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
int error;
int truncate = 0;
@ -1877,7 +1839,6 @@ xfs_inactive(
* to clean up here.
*/
if (VFS_I(ip)->i_mode == 0) {
ASSERT(ip->i_df.if_real_bytes == 0);
ASSERT(ip->i_df.if_broot_bytes == 0);
return;
}
@ -1890,7 +1851,7 @@ xfs_inactive(
return;
/* Try to clean out the cow blocks if there are any. */
if (xfs_is_reflink_inode(ip) && cow_ifp->if_bytes > 0)
if (xfs_inode_has_cow_data(ip))
xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
if (VFS_I(ip)->i_nlink != 0) {
@ -2445,9 +2406,8 @@ xfs_ifree_local_data(
*/
int
xfs_ifree(
xfs_trans_t *tp,
xfs_inode_t *ip,
struct xfs_defer_ops *dfops)
struct xfs_trans *tp,
struct xfs_inode *ip)
{
int error;
struct xfs_icluster xic = { 0 };
@ -2466,7 +2426,7 @@ xfs_ifree(
if (error)
return error;
error = xfs_difree(tp, ip->i_ino, dfops, &xic);
error = xfs_difree(tp, ip->i_ino, &xic);
if (error)
return error;
@ -2577,8 +2537,6 @@ xfs_remove(
xfs_trans_t *tp = NULL;
int is_dir = S_ISDIR(VFS_I(ip)->i_mode);
int error = 0;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
uint resblks;
trace_xfs_remove(dp, name);
@ -2658,13 +2616,10 @@ xfs_remove(
if (error)
goto out_trans_cancel;
xfs_defer_init(&dfops, &first_block);
tp->t_agfl_dfops = &dfops;
error = xfs_dir_removename(tp, dp, name, ip->i_ino,
&first_block, &dfops, resblks);
error = xfs_dir_removename(tp, dp, name, ip->i_ino, resblks);
if (error) {
ASSERT(error != -ENOENT);
goto out_bmap_cancel;
goto out_trans_cancel;
}
/*
@ -2675,10 +2630,6 @@ xfs_remove(
if (mp->m_flags & (XFS_MOUNT_WSYNC|XFS_MOUNT_DIRSYNC))
xfs_trans_set_sync(tp);
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
error = xfs_trans_commit(tp);
if (error)
goto std_return;
@ -2688,8 +2639,6 @@ xfs_remove(
return 0;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
std_return:
@ -2749,11 +2698,8 @@ xfs_sort_for_rename(
static int
xfs_finish_rename(
struct xfs_trans *tp,
struct xfs_defer_ops *dfops)
struct xfs_trans *tp)
{
int error;
/*
* If this is a synchronous mount, make sure that the rename transaction
* goes to disk before returning to the user.
@ -2761,13 +2707,6 @@ xfs_finish_rename(
if (tp->t_mountp->m_flags & (XFS_MOUNT_WSYNC|XFS_MOUNT_DIRSYNC))
xfs_trans_set_sync(tp);
error = xfs_defer_finish(&tp, dfops);
if (error) {
xfs_defer_cancel(dfops);
xfs_trans_cancel(tp);
return error;
}
return xfs_trans_commit(tp);
}
@ -2785,8 +2724,6 @@ xfs_cross_rename(
struct xfs_inode *dp2,
struct xfs_name *name2,
struct xfs_inode *ip2,
struct xfs_defer_ops *dfops,
xfs_fsblock_t *first_block,
int spaceres)
{
int error = 0;
@ -2795,16 +2732,12 @@ xfs_cross_rename(
int dp2_flags = 0;
/* Swap inode number for dirent in first parent */
error = xfs_dir_replace(tp, dp1, name1,
ip2->i_ino,
first_block, dfops, spaceres);
error = xfs_dir_replace(tp, dp1, name1, ip2->i_ino, spaceres);
if (error)
goto out_trans_abort;
/* Swap inode number for dirent in second parent */
error = xfs_dir_replace(tp, dp2, name2,
ip1->i_ino,
first_block, dfops, spaceres);
error = xfs_dir_replace(tp, dp2, name2, ip1->i_ino, spaceres);
if (error)
goto out_trans_abort;
@ -2818,8 +2751,7 @@ xfs_cross_rename(
if (S_ISDIR(VFS_I(ip2)->i_mode)) {
error = xfs_dir_replace(tp, ip2, &xfs_name_dotdot,
dp1->i_ino, first_block,
dfops, spaceres);
dp1->i_ino, spaceres);
if (error)
goto out_trans_abort;
@ -2845,8 +2777,7 @@ xfs_cross_rename(
if (S_ISDIR(VFS_I(ip1)->i_mode)) {
error = xfs_dir_replace(tp, ip1, &xfs_name_dotdot,
dp2->i_ino, first_block,
dfops, spaceres);
dp2->i_ino, spaceres);
if (error)
goto out_trans_abort;
@ -2885,10 +2816,9 @@ xfs_cross_rename(
}
xfs_trans_ichgtime(tp, dp1, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
xfs_trans_log_inode(tp, dp1, XFS_ILOG_CORE);
return xfs_finish_rename(tp, dfops);
return xfs_finish_rename(tp);
out_trans_abort:
xfs_defer_cancel(dfops);
xfs_trans_cancel(tp);
return error;
}
@ -2943,8 +2873,6 @@ xfs_rename(
{
struct xfs_mount *mp = src_dp->i_mount;
struct xfs_trans *tp;
struct xfs_defer_ops dfops;
xfs_fsblock_t first_block;
struct xfs_inode *wip = NULL; /* whiteout inode */
struct xfs_inode *inodes[__XFS_SORT_INODES];
int num_inodes = __XFS_SORT_INODES;
@ -3026,14 +2954,11 @@ xfs_rename(
goto out_trans_cancel;
}
xfs_defer_init(&dfops, &first_block);
tp->t_agfl_dfops = &dfops;
/* RENAME_EXCHANGE is unique from here on. */
if (flags & RENAME_EXCHANGE)
return xfs_cross_rename(tp, src_dp, src_name, src_ip,
target_dp, target_name, target_ip,
&dfops, &first_block, spaceres);
spaceres);
/*
* Set up the target.
@ -3054,10 +2979,9 @@ xfs_rename(
* to account for the ".." reference from the new entry.
*/
error = xfs_dir_createname(tp, target_dp, target_name,
src_ip->i_ino, &first_block,
&dfops, spaceres);
src_ip->i_ino, spaceres);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
xfs_trans_ichgtime(tp, target_dp,
XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
@ -3065,7 +2989,7 @@ xfs_rename(
if (new_parent && src_is_directory) {
error = xfs_bumplink(tp, target_dp);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
}
} else { /* target_ip != NULL */
/*
@ -3094,10 +3018,9 @@ xfs_rename(
* name at the destination directory, remove it first.
*/
error = xfs_dir_replace(tp, target_dp, target_name,
src_ip->i_ino,
&first_block, &dfops, spaceres);
src_ip->i_ino, spaceres);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
xfs_trans_ichgtime(tp, target_dp,
XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
@ -3108,7 +3031,7 @@ xfs_rename(
*/
error = xfs_droplink(tp, target_ip);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
if (src_is_directory) {
/*
@ -3116,7 +3039,7 @@ xfs_rename(
*/
error = xfs_droplink(tp, target_ip);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
}
} /* target_ip != NULL */
@ -3129,11 +3052,10 @@ xfs_rename(
* directory.
*/
error = xfs_dir_replace(tp, src_ip, &xfs_name_dotdot,
target_dp->i_ino,
&first_block, &dfops, spaceres);
target_dp->i_ino, spaceres);
ASSERT(error != -EEXIST);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
}
/*
@ -3159,7 +3081,7 @@ xfs_rename(
*/
error = xfs_droplink(tp, src_dp);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
}
/*
@ -3169,12 +3091,12 @@ xfs_rename(
*/
if (wip) {
error = xfs_dir_replace(tp, src_dp, src_name, wip->i_ino,
&first_block, &dfops, spaceres);
spaceres);
} else
error = xfs_dir_removename(tp, src_dp, src_name, src_ip->i_ino,
&first_block, &dfops, spaceres);
spaceres);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
/*
* For whiteouts, we need to bump the link count on the whiteout inode.
@ -3188,10 +3110,10 @@ xfs_rename(
ASSERT(VFS_I(wip)->i_nlink == 0);
error = xfs_bumplink(tp, wip);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
error = xfs_iunlink_remove(tp, wip);
if (error)
goto out_bmap_cancel;
goto out_trans_cancel;
xfs_trans_log_inode(tp, wip, XFS_ILOG_CORE);
/*
@ -3207,18 +3129,16 @@ xfs_rename(
if (new_parent)
xfs_trans_log_inode(tp, target_dp, XFS_ILOG_CORE);
error = xfs_finish_rename(tp, &dfops);
error = xfs_finish_rename(tp);
if (wip)
IRELE(wip);
xfs_irele(wip);
return error;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_trans_cancel:
xfs_trans_cancel(tp);
out_release_wip:
if (wip)
IRELE(wip);
xfs_irele(wip);
return error;
}
@ -3674,3 +3594,12 @@ xfs_iflush_int(
corrupt_out:
return -EFSCORRUPTED;
}
/* Release an inode. */
void
xfs_irele(
struct xfs_inode *ip)
{
trace_xfs_irele(ip, _RET_IP_);
iput(VFS_I(ip));
}


@ -15,7 +15,6 @@
struct xfs_dinode;
struct xfs_inode;
struct xfs_buf;
struct xfs_defer_ops;
struct xfs_bmbt_irec;
struct xfs_inode_log_item;
struct xfs_mount;
@ -34,9 +33,9 @@ typedef struct xfs_inode {
struct xfs_imap i_imap; /* location for xfs_imap() */
/* Extent information. */
xfs_ifork_t *i_afp; /* attribute fork pointer */
xfs_ifork_t *i_cowfp; /* copy on write extents */
xfs_ifork_t i_df; /* data fork */
struct xfs_ifork *i_afp; /* attribute fork pointer */
struct xfs_ifork *i_cowfp; /* copy on write extents */
struct xfs_ifork i_df; /* data fork */
/* operations vectors */
const struct xfs_dir_ops *d_ops; /* directory ops vector */
@ -198,6 +197,15 @@ static inline bool xfs_is_reflink_inode(struct xfs_inode *ip)
return ip->i_d.di_flags2 & XFS_DIFLAG2_REFLINK;
}
/*
* Check if an inode has any data in the COW fork. This might be often false
* even for inodes with the reflink flag when there is no pending COW operation.
*/
static inline bool xfs_inode_has_cow_data(struct xfs_inode *ip)
{
return ip->i_cowfp && ip->i_cowfp->if_bytes;
}
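
Call sites that used to fetch the COW fork and test two conditions collapse to this single predicate, e.g. (mirroring the xfs_inactive and xfs_prep_free_cowblocks hunks earlier in this diff):

	/* before */
	struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
	if (xfs_is_reflink_inode(ip) && ifp->if_bytes > 0)
		xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);

	/* after */
	if (xfs_inode_has_cow_data(ip))
		xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);

Note the helper also tolerates an inode with no COW fork at all (i_cowfp == NULL), which the open-coded tests had to guard against separately.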
/*
* In-core inode flags.
*/
@ -415,8 +423,7 @@ uint xfs_ilock_data_map_shared(struct xfs_inode *);
uint xfs_ilock_attr_map_shared(struct xfs_inode *);
uint xfs_ip2xflags(struct xfs_inode *);
int xfs_ifree(struct xfs_trans *, xfs_inode_t *,
struct xfs_defer_ops *);
int xfs_ifree(struct xfs_trans *, struct xfs_inode *);
int xfs_itruncate_extents_flags(struct xfs_trans **,
struct xfs_inode *, int, xfs_fsize_t, int);
void xfs_iext_realloc(xfs_inode_t *, int, int);
@ -484,18 +491,7 @@ static inline void xfs_setup_existing_inode(struct xfs_inode *ip)
xfs_finish_inode_setup(ip);
}
#define IHOLD(ip) \
do { \
ASSERT(atomic_read(&VFS_I(ip)->i_count) > 0) ; \
ihold(VFS_I(ip)); \
trace_xfs_ihold(ip, _THIS_IP_); \
} while (0)
#define IRELE(ip) \
do { \
trace_xfs_irele(ip, _THIS_IP_); \
iput(VFS_I(ip)); \
} while (0)
void xfs_irele(struct xfs_inode *ip);
extern struct kmem_zone *xfs_inode_zone;


@ -194,8 +194,6 @@ xfs_inode_item_format_data_fork(
* to be there by xfs_idata_realloc().
*/
data_bytes = roundup(ip->i_df.if_bytes, 4);
ASSERT(ip->i_df.if_real_bytes == 0 ||
ip->i_df.if_real_bytes >= data_bytes);
ASSERT(ip->i_df.if_u1.if_data != NULL);
ASSERT(ip->i_d.di_size > 0);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_ILOCAL,
@ -280,8 +278,6 @@ xfs_inode_item_format_attr_fork(
* to be there by xfs_idata_realloc().
*/
data_bytes = roundup(ip->i_afp->if_bytes, 4);
ASSERT(ip->i_afp->if_real_bytes == 0 ||
ip->i_afp->if_real_bytes >= data_bytes);
ASSERT(ip->i_afp->if_u1.if_data != NULL);
xlog_copy_iovec(lv, vecp, XLOG_REG_TYPE_IATTR_LOCAL,
ip->i_afp->if_u1.if_data,


@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2000-2006 Silicon Graphics, Inc.
* Copyright (c) 2016 Christoph Hellwig.
* Copyright (c) 2016-2018 Christoph Hellwig.
* All Rights Reserved.
*/
#include <linux/iomap.h>
@ -152,13 +152,11 @@ xfs_iomap_write_direct(
xfs_fileoff_t offset_fsb;
xfs_fileoff_t last_fsb;
xfs_filblks_t count_fsb, resaligned;
xfs_fsblock_t firstfsb;
xfs_extlen_t extsz;
int nimaps;
int quota_flag;
int rt;
xfs_trans_t *tp;
struct xfs_defer_ops dfops;
uint qblocks, resblks, resrtextents;
int error;
int lockmode;
@ -254,21 +252,15 @@ xfs_iomap_write_direct(
* From this point onwards we overwrite the imap pointer that the
* caller gave to us.
*/
xfs_defer_init(&dfops, &firstfsb);
nimaps = 1;
error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb,
bmapi_flags, &firstfsb, resblks, imap,
&nimaps, &dfops);
bmapi_flags, resblks, imap, &nimaps);
if (error)
goto out_bmap_cancel;
goto out_res_cancel;
/*
* Complete the transaction
*/
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto out_bmap_cancel;
error = xfs_trans_commit(tp);
if (error)
goto out_unlock;
@ -288,8 +280,7 @@ out_unlock:
xfs_iunlock(ip, lockmode);
return error;
out_bmap_cancel:
xfs_defer_cancel(&dfops);
out_res_cancel:
xfs_trans_unreserve_quota_nblks(tp, ip, (long)qblocks, 0, quota_flag);
out_trans_cancel:
xfs_trans_cancel(tp);
@ -660,13 +651,13 @@ xfs_iomap_write_allocate(
xfs_inode_t *ip,
int whichfork,
xfs_off_t offset,
xfs_bmbt_irec_t *imap)
xfs_bmbt_irec_t *imap,
unsigned int *cow_seq)
{
xfs_mount_t *mp = ip->i_mount;
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork);
xfs_fileoff_t offset_fsb, last_block;
xfs_fileoff_t end_fsb, map_start_fsb;
xfs_fsblock_t first_block;
struct xfs_defer_ops dfops;
xfs_filblks_t count_fsb;
xfs_trans_t *tp;
int nimaps;
@ -716,8 +707,6 @@ xfs_iomap_write_allocate(
xfs_ilock(ip, XFS_ILOCK_EXCL);
xfs_trans_ijoin(tp, ip, 0);
xfs_defer_init(&dfops, &first_block);
/*
* it is possible that the extents have changed since
* we did the read call as we dropped the ilock for a
@ -770,13 +759,8 @@ xfs_iomap_write_allocate(
* pointer that the caller gave to us.
*/
error = xfs_bmapi_write(tp, ip, map_start_fsb,
count_fsb, flags, &first_block,
nres, imap, &nimaps,
&dfops);
if (error)
goto trans_cancel;
error = xfs_defer_finish(&tp, &dfops);
count_fsb, flags, nres, imap,
&nimaps);
if (error)
goto trans_cancel;
@ -784,6 +768,8 @@ xfs_iomap_write_allocate(
if (error)
goto error0;
if (whichfork == XFS_COW_FORK)
*cow_seq = READ_ONCE(ifp->if_seq);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
}
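
The new cow_seq out-parameter pairs with a sequence counter (if_seq) on the COW fork: allocation bumps the counter, writeback caches it alongside the mapping, and a mismatch means the cached COW mapping must be looked up again. The consumer side is roughly this (a sketch only; the exact revalidation lives in the writeback context code, and the allocate_blocks label here is assumed):

	if (wpc->io_type == XFS_IO_COW &&
	    wpc->cow_seq != READ_ONCE(ip->i_cowfp->if_seq))
		goto allocate_blocks;	/* COW fork changed underneath us */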
@ -810,7 +796,6 @@ xfs_iomap_write_allocate(
}
trans_cancel:
xfs_defer_cancel(&dfops);
xfs_trans_cancel(tp);
error0:
xfs_iunlock(ip, XFS_ILOCK_EXCL);
@ -828,11 +813,9 @@ xfs_iomap_write_unwritten(
xfs_fileoff_t offset_fsb;
xfs_filblks_t count_fsb;
xfs_filblks_t numblks_fsb;
xfs_fsblock_t firstfsb;
int nimaps;
xfs_trans_t *tp;
xfs_bmbt_irec_t imap;
struct xfs_defer_ops dfops;
struct inode *inode = VFS_I(ip);
xfs_fsize_t i_size;
uint resblks;
@ -877,11 +860,10 @@ xfs_iomap_write_unwritten(
/*
* Modify the unwritten extent state of the buffer.
*/
xfs_defer_init(&dfops, &firstfsb);
nimaps = 1;
error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb,
XFS_BMAPI_CONVERT, &firstfsb, resblks,
&imap, &nimaps, &dfops);
XFS_BMAPI_CONVERT, resblks, &imap,
&nimaps);
if (error)
goto error_on_bmapi_transaction;
@ -901,10 +883,6 @@ xfs_iomap_write_unwritten(
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
}
error = xfs_defer_finish(&tp, &dfops);
if (error)
goto error_on_bmapi_transaction;
error = xfs_trans_commit(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
if (error)
@ -928,7 +906,6 @@ xfs_iomap_write_unwritten(
return 0;
error_on_bmapi_transaction:
xfs_defer_cancel(&dfops);
xfs_trans_cancel(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
return error;
@ -1032,8 +1009,6 @@ xfs_file_iomap_begin(
if (XFS_FORCED_SHUTDOWN(mp))
return -EIO;
iomap->flags |= IOMAP_F_BUFFER_HEAD;
if (((flags & (IOMAP_WRITE | IOMAP_DIRECT)) == IOMAP_WRITE) &&
!IS_DAX(inode) && !xfs_get_extsz_hint(ip)) {
/* Reserve delalloc blocks for regular writeback. */
@ -1204,11 +1179,8 @@ xfs_file_iomap_end_delalloc(
truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
XFS_FSB_TO_B(mp, end_fsb) - 1);
xfs_ilock(ip, XFS_ILOCK_EXCL);
error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
end_fsb - start_fsb);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
if (error && !XFS_FORCED_SHUTDOWN(mp)) {
xfs_alert(mp, "%s: unable to clean up ino %lld",
__func__, ip->i_ino);


@ -14,7 +14,7 @@ struct xfs_bmbt_irec;
int xfs_iomap_write_direct(struct xfs_inode *, xfs_off_t, size_t,
struct xfs_bmbt_irec *, int);
int xfs_iomap_write_allocate(struct xfs_inode *, int, xfs_off_t,
struct xfs_bmbt_irec *);
struct xfs_bmbt_irec *, unsigned int *);
int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
void xfs_bmbt_to_iomap(struct xfs_inode *, struct iomap *,


@ -26,6 +26,7 @@
#include "xfs_dir2.h"
#include "xfs_trans_space.h"
#include "xfs_iomap.h"
#include "xfs_defer.h"
#include <linux/capability.h>
#include <linux/xattr.h>
@ -208,7 +209,7 @@ xfs_generic_create(
xfs_finish_inode_setup(ip);
if (!tmpfile)
xfs_cleanup_inode(dir, inode, dentry);
iput(inode);
xfs_irele(ip);
goto out_free_acl;
}
@ -390,7 +391,7 @@ xfs_vn_symlink(
out_cleanup_inode:
xfs_finish_inode_setup(cip);
xfs_cleanup_inode(dir, inode, dentry);
iput(inode);
xfs_irele(cip);
out:
return error;
}


@ -114,7 +114,7 @@ xfs_bulkstat_one_int(
break;
}
xfs_iunlock(ip, XFS_ILOCK_SHARED);
IRELE(ip);
xfs_irele(ip);
error = formatter(buffer, ubsize, ubused, buf);
if (!error)
@ -458,8 +458,7 @@ xfs_bulkstat(
* pending error, then we are done.
*/
del_cursor:
xfs_btree_del_cursor(cur, error ?
XFS_BTREE_ERROR : XFS_BTREE_NOERROR);
xfs_btree_del_cursor(cur, error);
xfs_buf_relse(agbp);
if (error)
break;
@ -632,8 +631,7 @@ next_ag:
kmem_free(buffer);
if (cur)
xfs_btree_del_cursor(cur, (error ? XFS_BTREE_ERROR :
XFS_BTREE_NOERROR));
xfs_btree_del_cursor(cur, error);
if (agbp)
xfs_buf_relse(agbp);


@ -410,7 +410,7 @@ out_error:
}
/*
* Reserve log space and return a ticket corresponding the reservation.
* Reserve log space and return a ticket corresponding to the reservation.
*
* Each reservation is going to reserve extra space for a log record header.
* When writes happen to the on-disk log, we don't subtract the length of the
@ -826,6 +826,88 @@ xfs_log_mount_cancel(
* deallocation must not be done until source-end.
*/
/* Actually write the unmount record to disk. */
static void
xfs_log_write_unmount_record(
struct xfs_mount *mp)
{
/* the data section must be 32 bit size aligned */
struct xfs_unmount_log_format magic = {
.magic = XLOG_UNMOUNT_TYPE,
};
struct xfs_log_iovec reg = {
.i_addr = &magic,
.i_len = sizeof(magic),
.i_type = XLOG_REG_TYPE_UNMOUNT,
};
struct xfs_log_vec vec = {
.lv_niovecs = 1,
.lv_iovecp = &reg,
};
struct xlog *log = mp->m_log;
struct xlog_in_core *iclog;
struct xlog_ticket *tic = NULL;
xfs_lsn_t lsn;
uint flags = XLOG_UNMOUNT_TRANS;
int error;
error = xfs_log_reserve(mp, 600, 1, &tic, XFS_LOG, 0);
if (error)
goto out_err;
/*
* If we think the summary counters are bad, clear the unmount header
* flag in the unmount record so that the summary counters will be
* recalculated during log recovery at next mount. Refer to
* xlog_check_unmount_rec for more details.
*/
if (XFS_TEST_ERROR((mp->m_flags & XFS_MOUNT_BAD_SUMMARY), mp,
XFS_ERRTAG_FORCE_SUMMARY_RECALC)) {
xfs_alert(mp, "%s: will fix summary counters at next mount",
__func__);
flags &= ~XLOG_UNMOUNT_TRANS;
}
/* remove inited flag, and account for space used */
tic->t_flags = 0;
tic->t_curr_res -= sizeof(magic);
error = xlog_write(log, &vec, tic, &lsn, NULL, flags);
/*
* At this point, we're umounting anyway, so there's no point in
* transitioning log state to IOERROR. Just continue...
*/
out_err:
if (error)
xfs_alert(mp, "%s: unmount record failed", __func__);
spin_lock(&log->l_icloglock);
iclog = log->l_iclog;
atomic_inc(&iclog->ic_refcnt);
xlog_state_want_sync(log, iclog);
spin_unlock(&log->l_icloglock);
error = xlog_state_release_iclog(log, iclog);
spin_lock(&log->l_icloglock);
switch (iclog->ic_state) {
default:
if (!XLOG_FORCED_SHUTDOWN(log)) {
xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
break;
}
/* fall through */
case XLOG_STATE_ACTIVE:
case XLOG_STATE_DIRTY:
spin_unlock(&log->l_icloglock);
break;
}
if (tic) {
trace_xfs_log_umount_write(log, tic);
xlog_ungrant_log_space(log, tic);
xfs_log_ticket_put(tic);
}
}
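
struct xfs_unmount_log_format gives a name to the anonymous struct in the old code removed below; assuming it simply promotes that layout to a shared type, it amounts to:

	/* 32-bit-size-aligned data section for the unmount record */
	struct xfs_unmount_log_format {
		uint16_t	magic;	/* XLOG_UNMOUNT_TYPE */
		uint16_t	pad1;
		uint32_t	pad2;	/* may as well make it 64 bits */
	};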
/*
* Unmount record used to have a string "Unmount filesystem--" in the
* data section where the "Un" was really a magic number (XLOG_UNMOUNT_TYPE).
@ -842,8 +924,6 @@ xfs_log_unmount_write(xfs_mount_t *mp)
#ifdef DEBUG
xlog_in_core_t *first_iclog;
#endif
xlog_ticket_t *tic = NULL;
xfs_lsn_t lsn;
int error;
/*
@ -870,66 +950,7 @@ xfs_log_unmount_write(xfs_mount_t *mp)
} while (iclog != first_iclog);
#endif
if (! (XLOG_FORCED_SHUTDOWN(log))) {
error = xfs_log_reserve(mp, 600, 1, &tic, XFS_LOG, 0);
if (!error) {
/* the data section must be 32 bit size aligned */
struct {
uint16_t magic;
uint16_t pad1;
uint32_t pad2; /* may as well make it 64 bits */
} magic = {
.magic = XLOG_UNMOUNT_TYPE,
};
struct xfs_log_iovec reg = {
.i_addr = &magic,
.i_len = sizeof(magic),
.i_type = XLOG_REG_TYPE_UNMOUNT,
};
struct xfs_log_vec vec = {
.lv_niovecs = 1,
.lv_iovecp = &reg,
};
/* remove inited flag, and account for space used */
tic->t_flags = 0;
tic->t_curr_res -= sizeof(magic);
error = xlog_write(log, &vec, tic, &lsn,
NULL, XLOG_UNMOUNT_TRANS);
/*
* At this point, we're umounting anyway,
* so there's no point in transitioning log state
* to IOERROR. Just continue...
*/
}
if (error)
xfs_alert(mp, "%s: unmount record failed", __func__);
spin_lock(&log->l_icloglock);
iclog = log->l_iclog;
atomic_inc(&iclog->ic_refcnt);
xlog_state_want_sync(log, iclog);
spin_unlock(&log->l_icloglock);
error = xlog_state_release_iclog(log, iclog);
spin_lock(&log->l_icloglock);
if (!(iclog->ic_state == XLOG_STATE_ACTIVE ||
iclog->ic_state == XLOG_STATE_DIRTY)) {
if (!XLOG_FORCED_SHUTDOWN(log)) {
xlog_wait(&iclog->ic_force_wait,
&log->l_icloglock);
} else {
spin_unlock(&log->l_icloglock);
}
} else {
spin_unlock(&log->l_icloglock);
}
if (tic) {
trace_xfs_log_umount_write(log, tic);
xlog_ungrant_log_space(log, tic);
xfs_log_ticket_put(tic);
}
xfs_log_write_unmount_record(mp);
} else {
/*
* We're already in forced_shutdown mode, couldn't
@ -4083,3 +4104,12 @@ xfs_log_check_lsn(
return valid;
}
bool
xfs_log_in_recovery(
struct xfs_mount *mp)
{
struct xlog *log = mp->m_log;
return log->l_flags & XLOG_ACTIVE_RECOVERY;
}


@ -153,5 +153,6 @@ bool xfs_log_item_in_current_chkpt(struct xfs_log_item *lip);
void xfs_log_work_queue(struct xfs_mount *mp);
void xfs_log_quiesce(struct xfs_mount *mp);
bool xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
bool xfs_log_in_recovery(struct xfs_mount *);
#endif /* __XFS_LOG_H__ */


@ -196,7 +196,7 @@ xlog_bread_noalign(
bp->b_io_length = nbblks;
bp->b_error = 0;
error = xfs_buf_submit_wait(bp);
error = xfs_buf_submit(bp);
if (error && !XFS_FORCED_SHUTDOWN(log->l_mp))
xfs_buf_ioerror_alert(bp, __func__);
return error;
@ -4733,10 +4733,9 @@ xlog_recover_cancel_rui(
/* Recover the CUI if necessary. */
STATIC int
xlog_recover_process_cui(
struct xfs_mount *mp,
struct xfs_trans *parent_tp,
struct xfs_ail *ailp,
struct xfs_log_item *lip,
struct xfs_defer_ops *dfops)
struct xfs_log_item *lip)
{
struct xfs_cui_log_item *cuip;
int error;
@ -4749,7 +4748,7 @@ xlog_recover_process_cui(
return 0;
spin_unlock(&ailp->ail_lock);
error = xfs_cui_recover(mp, cuip, dfops);
error = xfs_cui_recover(parent_tp, cuip);
spin_lock(&ailp->ail_lock);
return error;
@ -4774,10 +4773,9 @@ xlog_recover_cancel_cui(
/* Recover the BUI if necessary. */
STATIC int
xlog_recover_process_bui(
struct xfs_mount *mp,
struct xfs_trans *parent_tp,
struct xfs_ail *ailp,
struct xfs_log_item *lip,
struct xfs_defer_ops *dfops)
struct xfs_log_item *lip)
{
struct xfs_bui_log_item *buip;
int error;
@ -4790,7 +4788,7 @@ xlog_recover_process_bui(
return 0;
spin_unlock(&ailp->ail_lock);
error = xfs_bui_recover(mp, buip, dfops);
error = xfs_bui_recover(parent_tp, buip);
spin_lock(&ailp->ail_lock);
return error;
@ -4829,9 +4827,9 @@ static inline bool xlog_item_is_intent(struct xfs_log_item *lip)
/* Take all the collected deferred ops and finish them in order. */
static int
xlog_finish_defer_ops(
struct xfs_mount *mp,
struct xfs_defer_ops *dfops)
struct xfs_trans *parent_tp)
{
struct xfs_mount *mp = parent_tp->t_mountp;
struct xfs_trans *tp;
int64_t freeblks;
uint resblks;
@ -4854,16 +4852,10 @@ xlog_finish_defer_ops(
0, XFS_TRANS_RESERVE, &tp);
if (error)
return error;
error = xfs_defer_finish(&tp, dfops);
if (error)
goto out_cancel;
/* transfer all collected dfops to this transaction */
xfs_defer_move(tp, parent_tp);
return xfs_trans_commit(tp);
out_cancel:
xfs_trans_cancel(tp);
return error;
}
/*
@ -4886,23 +4878,34 @@ STATIC int
xlog_recover_process_intents(
struct xlog *log)
{
struct xfs_defer_ops dfops;
struct xfs_trans *parent_tp;
struct xfs_ail_cursor cur;
struct xfs_log_item *lip;
struct xfs_ail *ailp;
xfs_fsblock_t firstfsb;
int error = 0;
int error;
#if defined(DEBUG) || defined(XFS_WARN)
xfs_lsn_t last_lsn;
#endif
/*
* The intent recovery handlers commit transactions to complete recovery
* for individual intents, but any new deferred operations that are
* queued during that process are held off until the very end. The
* purpose of this transaction is to serve as a container for deferred
* operations. Each intent recovery handler must transfer dfops here
* before its local transaction commits, and we'll finish the entire
* list below.
*/
error = xfs_trans_alloc_empty(log->l_mp, &parent_tp);
if (error)
return error;
ailp = log->l_ailp;
spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
#if defined(DEBUG) || defined(XFS_WARN)
last_lsn = xlog_assign_lsn(log->l_curr_cycle, log->l_curr_block);
#endif
xfs_defer_init(&dfops, &firstfsb);
while (lip != NULL) {
/*
* We're done when we see something other than an intent.
@ -4937,12 +4940,10 @@ xlog_recover_process_intents(
error = xlog_recover_process_rui(log->l_mp, ailp, lip);
break;
case XFS_LI_CUI:
error = xlog_recover_process_cui(log->l_mp, ailp, lip,
&dfops);
error = xlog_recover_process_cui(parent_tp, ailp, lip);
break;
case XFS_LI_BUI:
error = xlog_recover_process_bui(log->l_mp, ailp, lip,
&dfops);
error = xlog_recover_process_bui(parent_tp, ailp, lip);
break;
}
if (error)
@ -4952,10 +4953,9 @@ xlog_recover_process_intents(
out:
xfs_trans_ail_cursor_done(&cur);
spin_unlock(&ailp->ail_lock);
if (error)
xfs_defer_cancel(&dfops);
else
error = xlog_finish_defer_ops(log->l_mp, &dfops);
if (!error)
error = xlog_finish_defer_ops(parent_tp);
xfs_trans_cancel(parent_tp);
return error;
}
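
Per the comment above, each intent handler replays its item under a local transaction and hands any newly queued deferred ops to the container before committing. The handler-side pattern is roughly (a sketch; the reservation choice varies by handler):

	/* inside e.g. xfs_bui_recover(parent_tp, buip) */
	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate, 0, 0, 0, &tp);
	if (error)
		return error;
	/* ... replay the logged extent operation under tp ... */
	xfs_defer_move(parent_tp, tp);	/* park new dfops on the container */
	error = xfs_trans_commit(tp);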
@ -5094,11 +5094,11 @@ xlog_recover_process_one_iunlink(
*/
ip->i_d.di_dmevmask = 0;
IRELE(ip);
xfs_irele(ip);
return agino;
fail_iput:
IRELE(ip);
xfs_irele(ip);
fail:
/*
* We can't read in the inode this bucket points to, or this inode
@ -5707,7 +5707,7 @@ xlog_do_recover(
bp->b_flags |= XBF_READ;
bp->b_ops = &xfs_sb_buf_ops;
error = xfs_buf_submit_wait(bp);
error = xfs_buf_submit(bp);
if (error) {
if (!XFS_FORCED_SHUTDOWN(mp)) {
xfs_buf_ioerror_alert(bp, __func__);


@ -207,6 +207,9 @@ xfs_initialize_perag(
if (xfs_buf_hash_init(pag))
goto out_free_pag;
init_waitqueue_head(&pag->pagb_wait);
spin_lock_init(&pag->pagb_lock);
pag->pagb_count = 0;
pag->pagb_tree = RB_ROOT;
if (radix_tree_preload(GFP_NOFS))
goto out_hash_destroy;
@ -606,6 +609,56 @@ xfs_default_resblks(xfs_mount_t *mp)
return resblks;
}
/* Ensure the summary counts are correct. */
STATIC int
xfs_check_summary_counts(
struct xfs_mount *mp)
{
/*
* The AG0 superblock verifier rejects in-progress filesystems,
* so we should never see the flag set this far into mounting.
*/
if (mp->m_sb.sb_inprogress) {
xfs_err(mp, "sb_inprogress set after log recovery??");
WARN_ON(1);
return -EFSCORRUPTED;
}
/*
* Now the log is mounted, we know if it was an unclean shutdown or
* not. If it was, then once the first phase of recovery has completed, we
* have consistent AG blocks on disk. We have not recovered EFIs yet,
* but they are recovered transactionally in the second recovery phase
* later.
*
* If the log was clean when we mounted, we can check the summary
* counters. If any of them are obviously incorrect, we can recompute
* them from the AGF headers in the next step.
*/
if (XFS_LAST_UNMOUNT_WAS_CLEAN(mp) &&
(mp->m_sb.sb_fdblocks > mp->m_sb.sb_dblocks ||
mp->m_sb.sb_ifree > mp->m_sb.sb_icount))
mp->m_flags |= XFS_MOUNT_BAD_SUMMARY;
/*
* We can safely re-initialise incore superblock counters from the
* per-ag data. These may not be correct if the filesystem was not
* cleanly unmounted, so we waited for recovery to finish before doing
* this.
*
* If the filesystem was cleanly unmounted or the previous check did
* not flag anything weird, then we can trust the values in the
* superblock to be correct and we don't need to do anything here.
* Otherwise, recalculate the summary counters.
*/
if ((!xfs_sb_version_haslazysbcount(&mp->m_sb) ||
XFS_LAST_UNMOUNT_WAS_CLEAN(mp)) &&
!(mp->m_flags & XFS_MOUNT_BAD_SUMMARY))
return 0;
return xfs_initialize_perag_data(mp, mp->m_sb.sb_agcount);
}
/*
* This function does the following on an initial mount of a file system:
* - reads the superblock from disk and init the mount struct
@ -831,32 +884,10 @@ xfs_mountfs(
goto out_fail_wait;
}
/*
* Now the log is mounted, we know if it was an unclean shutdown or
* not. If it was, with the first phase of recovery has completed, we
* have consistent AG blocks on disk. We have not recovered EFIs yet,
* but they are recovered transactionally in the second recovery phase
* later.
*
* Hence we can safely re-initialise incore superblock counters from
* the per-ag data. These may not be correct if the filesystem was not
* cleanly unmounted, so we need to wait for recovery to finish before
* doing this.
*
* If the filesystem was cleanly unmounted, then we can trust the
* values in the superblock to be correct and we don't need to do
* anything here.
*
* If we are currently making the filesystem, the initialisation will
* fail as the perag data is in an undefined state.
*/
if (xfs_sb_version_haslazysbcount(&mp->m_sb) &&
!XFS_LAST_UNMOUNT_WAS_CLEAN(mp) &&
!mp->m_sb.sb_inprogress) {
error = xfs_initialize_perag_data(mp, sbp->sb_agcount);
if (error)
goto out_log_dealloc;
}
/* Make sure the summary counts are ok. */
error = xfs_check_summary_counts(mp);
if (error)
goto out_log_dealloc;
/*
* Get and sanity-check the root inode.
@ -1011,7 +1042,7 @@ xfs_mountfs(
out_rtunmount:
xfs_rtunmount_inodes(mp);
out_rele_rip:
IRELE(rip);
xfs_irele(rip);
/* Clean out dquots that might be in memory after quotacheck. */
xfs_qm_unmount(mp);
/*
@ -1067,7 +1098,7 @@ xfs_unmountfs(
xfs_fs_unreserve_ag_blocks(mp);
xfs_qm_unmount_quotas(mp);
xfs_rtunmount_inodes(mp);
IRELE(mp->m_rootip);
xfs_irele(mp->m_rootip);
/*
* We can potentially deadlock here if we have an inode cluster
@ -1395,3 +1426,16 @@ xfs_dev_is_read_only(
}
return 0;
}
/* Force the summary counters to be recalculated at next mount. */
void
xfs_force_summary_recalc(
struct xfs_mount *mp)
{
if (!xfs_sb_version_haslazysbcount(&mp->m_sb))
return;
spin_lock(&mp->m_sb_lock);
mp->m_flags |= XFS_MOUNT_BAD_SUMMARY;
spin_unlock(&mp->m_sb_lock);
}

Некоторые файлы не были показаны из-за слишком большого количества измененных файлов Показать больше