Btrfs: fix incorrect compression ratio detection
Steps to reproduce:

	# mkfs.btrfs -f /dev/sdb
	# mount -t btrfs /dev/sdb /mnt -o compress=lzo
	# dd if=/dev/zero of=/mnt/data bs=$((33*4096)) count=1

After the steps above, the inode is detected as having a bad compression
ratio and the NOCOMPRESS flag is set for it.

The reason is that compression handles at most 128K of pages per pass, so a
132K write is split into two passes (128K + 4K). This bug is a leftover from
commit 68bb462d42a ("Btrfs: don't compress for a small write").

Fix the problem by checking before every compression pass: if it is a small
write (<= blocksize), bail out and fall back to writing the range
uncompressed.

Signed-off-by: Wang Shilong <wangshilong1991@gmail.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Chris Mason <clm@fb.com>
Parent: 7bdcefc103
Commit: 4bcbb33255
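To make the split concrete, here is a minimal userspace sketch (not kernel code) of the per-pass behaviour described in the commit message above. The 128K per-pass cap and the 4K block size are the values named in the message; everything else is illustrative:

/*
 * Minimal userspace sketch (not kernel code): compression is fed at most
 * 128K per pass, so the 33 * 4096 = 132K range from the reproducer is
 * handled as one 128K pass followed by a 4K pass.  A 4K pass can never
 * save space once its compressed result is rounded up to a full 4K block,
 * which is what used to trip the bad-ratio detection.
 */
#include <stdio.h>

#define SZ_128K		(128 * 1024)
#define BLOCKSIZE	4096

int main(void)
{
	unsigned long long start = 0, end = 33ULL * 4096 - 1;	/* dd bs=$((33*4096)) count=1 */
	int pass = 0;

	while (start <= end) {
		unsigned long long total = end - start + 1;

		if (total > SZ_128K)
			total = SZ_128K;	/* per-pass cap */

		printf("pass %d: %llu bytes%s\n", ++pass, total,
		       total <= BLOCKSIZE ?
		       " (<= blocksize, cannot save a block)" : "");

		start += total;
	}
	return 0;
}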
@@ -411,14 +411,6 @@ static noinline int compress_file_range(struct inode *inode,
 	    (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
 		btrfs_add_inode_defrag(NULL, inode);
 
-	/*
-	 * skip compression for a small file range(<=blocksize) that
-	 * isn't an inline extent, since it dosen't save disk space at all.
-	 */
-	if ((end - start + 1) <= blocksize &&
-	    (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
-		goto cleanup_and_bail_uncompressed;
-
 	actual_end = min_t(u64, isize, end + 1);
 again:
 	will_compress = 0;
@@ -440,6 +432,14 @@ again:
 
 	total_compressed = actual_end - start;
 
+	/*
+	 * skip compression for a small file range(<=blocksize) that
+	 * isn't an inline extent, since it dosen't save disk space at all.
+	 */
+	if (total_compressed <= blocksize &&
+	    (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
+		goto cleanup_and_bail_uncompressed;
+
 	/* we want to make sure that amount of ram required to uncompress
 	 * an extent is reasonable, so we limit the total size in ram
 	 * of a compressed extent to 128k. This is a crucial number
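For reference, a small userspace model of why the placement of the check matters; this is an illustration under the same assumptions as the sketch above, not the real compress_file_range(). With the check applied once to the whole dirty range, the trailing 4K pass still reaches the compressor, its output is rounded up to a full block, the ratio looks bad and the inode would be flagged NOCOMPRESS; with the check applied per pass, that pass bails out and the flag is never set:

/* Userspace model only: the flag name and the simulated "rounds up to one
 * block" compression result are illustrative, not kernel behaviour. */
#include <stdbool.h>
#include <stdio.h>

#define SZ_128K		(128 * 1024)
#define BLOCKSIZE	4096

static bool nocompress_flag_set(bool check_per_pass)
{
	unsigned long long start = 0, end = 33ULL * 4096 - 1;

	/* old behaviour: one check up front on the whole dirty range */
	if (!check_per_pass && end - start + 1 <= BLOCKSIZE)
		return false;			/* bail out, nothing gets marked */

	while (start <= end) {
		unsigned long long total = end - start + 1;

		if (total > SZ_128K)
			total = SZ_128K;	/* per-pass cap */

		/* new behaviour: check the size of this pass */
		if (check_per_pass && total <= BLOCKSIZE)
			return false;		/* bail out, nothing gets marked */

		/* pretend the data compresses very well; on disk the result
		 * still occupies at least one block */
		unsigned long long compressed = BLOCKSIZE;

		if (compressed >= total)
			return true;		/* "bad ratio" -> NOCOMPRESS */

		start += total;
	}
	return false;
}

int main(void)
{
	printf("whole-range check: NOCOMPRESS set = %d\n", nocompress_flag_set(false));
	printf("per-pass check:    NOCOMPRESS set = %d\n", nocompress_flag_set(true));
	return 0;
}

Built with a plain cc, the model reports the flag being set only with the whole-range check, matching the behaviour described in the commit message.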