btrfs: Don't hardcode the csum size in btrfs_ordered_sum_size
Currently the function uses a hardcoded value for the checksum size of a sector. This is fine, given that we currently support only a single algorithm, whose checksum is 4 bytes == sizeof(u32). Despite not having other algorithms, btrfs' design supports using a different algorithm with different space requirements. To future-proof the code, query the size of the currently used algorithm from the in-memory copy of the super block. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Su Yue <suy.fnst@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Parent: 97dc231e89
Commit: af89e0dc2c
@@ -151,7 +151,9 @@ static inline int btrfs_ordered_sum_size(struct btrfs_fs_info *fs_info,
 					 unsigned long bytes)
 {
 	int num_sectors = (int)DIV_ROUND_UP(bytes, fs_info->sectorsize);
-	return sizeof(struct btrfs_ordered_sum) + num_sectors * sizeof(u32);
+	int csum_size = btrfs_super_csum_size(fs_info->super_copy);
+
+	return sizeof(struct btrfs_ordered_sum) + num_sectors * csum_size;
 }
 
 static inline void
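
For context, btrfs_super_csum_size() is the helper the new code calls to read the per-sector checksum size for the algorithm recorded in the super block. Below is a minimal sketch of what such a lookup can look like; the csum_sizes table and the exact body are assumptions for illustration, not the definition taken from this commit (kernel typedefs u16/u32 assumed):

static inline u32 btrfs_super_csum_size(const struct btrfs_super_block *sb)
{
	/* Hypothetical table: one entry per supported algorithm, only CRC32C (4 bytes) today. */
	const u16 csum_sizes[] = { sizeof(u32) };
	/* Checksum type stored in the super block; validated at mount time. */
	u16 type = btrfs_super_csum_type(sb);

	return csum_sizes[type];
}

With this indirection, btrfs_ordered_sum_size() no longer needs to know which algorithm is in use; supporting an algorithm with a different checksum width only requires extending the size table.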