f2fs-for-5.6

Merge tag 'f2fs-for-5.6' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:

 "In this series, we've implemented transparent compression experimentally.
  It supports LZO and LZ4, and we will add more algorithms later as we
  investigate them in the field. At this point, the feature doesn't expose
  compressed space to the user directly, in order to guarantee that later
  data updates to that space remain possible. Instead, the main goal is to
  reduce data writes to the flash disk as much as possible, which extends
  disk lifetime and relaxes IO congestion. Alternatively, we're also
  considering adding an ioctl() to reclaim compressed space and show it to
  the user after setting the immutable bit.

  Enhancements:
   - add compression support
   - avoid unnecessary locks in quota ops
   - harden power-cut scenario for zoned block devices
   - use private bio_set to avoid IO congestion
   - replace GC mutex with rwsem to serialize callers

  Bug fixes:
   - fix dentry consistency and memory corruption in rename()'s error case
   - fix wrong swap extent reports
   - fix casefolding bugs
   - change lock coverage to avoid deadlock
   - avoid GFP_KERNEL under f2fs_lock_op

  And we've cleaned up sysfs entries to prepare for removing debugfs"

* tag 'f2fs-for-5.6' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (31 commits)
  f2fs: fix race conditions in ->d_compare() and ->d_hash()
  f2fs: fix dcache lookup of !casefolded directories
  f2fs: Add f2fs stats to sysfs
  f2fs: delete duplicate information on sysfs nodes
  f2fs: change to use rwsem for gc_mutex
  f2fs: update f2fs document regarding to fsync_mode
  f2fs: add a way to turn off ipu bio cache
  f2fs: code cleanup for f2fs_statfs_project()
  f2fs: fix miscounted block limit in f2fs_statfs_project()
  f2fs: show the CP_PAUSE reason in checkpoint traces
  f2fs: fix deadlock allocating bio_post_read_ctx from mempool
  f2fs: remove unneeded check for error allocating bio_post_read_ctx
  f2fs: convert inline_dir early before starting rename
  f2fs: fix memleak of kobject
  f2fs: fix to add swap extent correctly
  f2fs: run fsck when getting bad inode during GC
  f2fs: support data compression
  f2fs: free sysfs kobject
  f2fs: declare nested quota_sem and remove unnecessary sems
  f2fs: don't put new_page twice in f2fs_rename
  ...
Commit 6e135baed8
@@ -1,37 +1,40 @@
What:		/sys/fs/f2fs/<disk>/gc_max_sleep_time
Date:		July 2013
Contact:	"Namjae Jeon" <namjae.jeon@samsung.com>
Description:	Controls the maximum sleep time for gc_thread. Time
		is in milliseconds.

What:		/sys/fs/f2fs/<disk>/gc_min_sleep_time
Date:		July 2013
Contact:	"Namjae Jeon" <namjae.jeon@samsung.com>
Description:	Controls the minimum sleep time for gc_thread. Time
		is in milliseconds.

What:		/sys/fs/f2fs/<disk>/gc_no_gc_sleep_time
Date:		July 2013
Contact:	"Namjae Jeon" <namjae.jeon@samsung.com>
Description:	Controls the default sleep time for gc_thread. Time
		is in milliseconds.

What:		/sys/fs/f2fs/<disk>/gc_idle
Date:		July 2013
Contact:	"Namjae Jeon" <namjae.jeon@samsung.com>
Description:	Controls the victim selection policy for garbage collection.
		Setting gc_idle = 0(default) will disable this option. Setting
		gc_idle = 1 will select the Cost Benefit approach & setting
		gc_idle = 2 will select the greedy approach.

What:		/sys/fs/f2fs/<disk>/reclaim_segments
Date:		October 2013
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	This parameter controls the number of prefree segments to be
		reclaimed. If the number of prefree segments is larger than
		the number of segments in the proportion to the percentage
		over total volume size, f2fs tries to conduct checkpoint to
		reclaim the prefree segments to free segments.
		By default, 5% over total # of segments.

What:		/sys/fs/f2fs/<disk>/main_blkaddr
Date:		November 2019
Contact:	"Ramon Pantin" <pantin@google.com>
Description:
@@ -40,227 +43,278 @@ Description:

What:		/sys/fs/f2fs/<disk>/ipu_policy
Date:		November 2013
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the in-place-update policy.
		updates in f2fs. User can set:
		0x01: F2FS_IPU_FORCE, 0x02: F2FS_IPU_SSR,
		0x04: F2FS_IPU_UTIL, 0x08: F2FS_IPU_SSR_UTIL,
		0x10: F2FS_IPU_FSYNC, 0x20: F2FS_IPU_ASYNC,
		0x40: F2FS_IPU_NOCACHE.
		Refer segment.h for details.

What:		/sys/fs/f2fs/<disk>/min_ipu_util
Date:		November 2013
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the FS utilization condition for the in-place-update
		policies. It is used by F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.

What:		/sys/fs/f2fs/<disk>/min_fsync_blocks
Date:		September 2014
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the dirty page count condition for the in-place-update
		policies.

What:		/sys/fs/f2fs/<disk>/min_seq_blocks
Date:		August 2018
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the dirty page count condition for batched sequential
		writes in writepages.

What:		/sys/fs/f2fs/<disk>/min_hot_blocks
Date:		March 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the dirty page count condition for redefining hot data.

What:		/sys/fs/f2fs/<disk>/min_ssr_sections
Date:		October 2017
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Controls the free section threshold to trigger SSR allocation.
		If this is large, SSR mode will be enabled early.

What:		/sys/fs/f2fs/<disk>/max_small_discards
Date:		November 2013
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the issue rate of discard commands that consist of small
		blocks less than 2MB. The candidates to be discarded are cached until
		checkpoint is triggered, and issued during the checkpoint.
		By default, it is disabled with 0.

What:		/sys/fs/f2fs/<disk>/discard_granularity
Date:		July 2017
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Controls discard granularity of inner discard thread. Inner thread
		will not issue discards with size that is smaller than granularity.
		The unit size is one block(4KB), now only support configuring
		in range of [1, 512]. Default value is 4(=16KB).

What:		/sys/fs/f2fs/<disk>/umount_discard_timeout
Date:		January 2019
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Set timeout to issue discard commands during umount.
		Default: 5 secs

What:		/sys/fs/f2fs/<disk>/max_victim_search
Date:		January 2014
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the number of trials to find a victim segment
		when conducting SSR and cleaning operations. The default value
		is 4096 which covers 8GB block address range.

What:		/sys/fs/f2fs/<disk>/migration_granularity
Date:		October 2018
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Controls migration granularity of garbage collection on large
		section, it can let GC move partial segment{s} of one section
		in one GC cycle, so that dispersing heavy overhead GC to
		multiple lightweight one.

What:		/sys/fs/f2fs/<disk>/dir_level
Date:		March 2014
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the directory level for large directory. If a
		directory has a number of files, it can reduce the file lookup
		latency by increasing this dir_level value. Otherwise, it
		needs to decrease this value to reduce the space overhead.
		The default value is 0.

What:		/sys/fs/f2fs/<disk>/ram_thresh
Date:		March 2014
Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:	Controls the memory footprint used by free nids and cached
		nat entries. By default, 1 is set, which indicates
		10 MB / 1 GB RAM.

What:		/sys/fs/f2fs/<disk>/batched_trim_sections
Date:		February 2015
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the trimming rate in batch mode.
		<deprecated>

What:		/sys/fs/f2fs/<disk>/cp_interval
Date:		October 2015
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the checkpoint timing, set to 60 seconds by default.

What:		/sys/fs/f2fs/<disk>/idle_interval
Date:		January 2016
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls the idle timing of system, if there is no FS operation
		during given interval.
		Set to 5 seconds by default.

What:		/sys/fs/f2fs/<disk>/discard_idle_interval
Date:		September 2018
Contact:	"Chao Yu" <yuchao0@huawei.com>
Contact:	"Sahitya Tummala" <stummala@codeaurora.org>
Description:	Controls the idle timing of discard thread given
		this time interval.
		Default is 5 secs.

What:		/sys/fs/f2fs/<disk>/gc_idle_interval
Date:		September 2018
Contact:	"Chao Yu" <yuchao0@huawei.com>
Contact:	"Sahitya Tummala" <stummala@codeaurora.org>
Description:	Controls the idle timing for gc path. Set to 5 seconds by default.

What:		/sys/fs/f2fs/<disk>/iostat_enable
Date:		August 2017
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Controls to enable/disable IO stat.

What:		/sys/fs/f2fs/<disk>/ra_nid_pages
Date:		October 2015
Contact:	"Chao Yu" <chao2.yu@samsung.com>
Description:	Controls the count of nid pages to be readaheaded.
		When building free nids, F2FS reads NAT blocks ahead for
		speed up. Default is 0.

What:		/sys/fs/f2fs/<disk>/dirty_nats_ratio
Date:		January 2016
Contact:	"Chao Yu" <chao2.yu@samsung.com>
Description:	Controls dirty nat entries ratio threshold, if current
		ratio exceeds configured threshold, checkpoint will
		be triggered for flushing dirty nat entries.

What:		/sys/fs/f2fs/<disk>/lifetime_write_kbytes
Date:		January 2016
Contact:	"Shuoran Liu" <liushuoran@huawei.com>
Description:	Shows total written kbytes issued to disk.

What:		/sys/fs/f2fs/<disk>/features
Date:		July 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Shows all enabled features in current device.

What:		/sys/fs/f2fs/<disk>/inject_rate
Date:		May 2016
Contact:	"Sheng Yong" <shengyong1@huawei.com>
Description:	Controls the injection rate of arbitrary faults.

What:		/sys/fs/f2fs/<disk>/inject_type
Date:		May 2016
Contact:	"Sheng Yong" <shengyong1@huawei.com>
Description:	Controls the injection type of arbitrary faults.

What:		/sys/fs/f2fs/<disk>/dirty_segments
Date:		October 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Shows the number of dirty segments.

What:		/sys/fs/f2fs/<disk>/reserved_blocks
Date:		June 2017
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Controls target reserved blocks in system, the threshold
		is soft, it could exceed current available user space.

What:		/sys/fs/f2fs/<disk>/current_reserved_blocks
Date:		October 2017
Contact:	"Yunlong Song" <yunlong.song@huawei.com>
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Shows current reserved blocks in system, it may be temporarily
		smaller than target_reserved_blocks, but will gradually
		increase to target_reserved_blocks when more free blocks are
		freed by user later.

What:		/sys/fs/f2fs/<disk>/gc_urgent
Date:		August 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Do background GC aggressively when set. When gc_urgent = 1,
		background thread starts to do GC by given gc_urgent_sleep_time
		interval. It is set to 0 by default.

What:		/sys/fs/f2fs/<disk>/gc_urgent_sleep_time
Date:		August 2017
Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
Description:	Controls sleep time of GC urgent mode. Set to 500ms by default.

What:		/sys/fs/f2fs/<disk>/readdir_ra
Date:		November 2017
Contact:	"Sheng Yong" <shengyong1@huawei.com>
Description:	Controls readahead inode block in readdir. Enabled by default.

What:		/sys/fs/f2fs/<disk>/gc_pin_file_thresh
Date:		January 2018
Contact:	Jaegeuk Kim <jaegeuk@kernel.org>
Description:	This indicates how many GC can be failed for the pinned
		file. If it exceeds this, F2FS doesn't guarantee its pinning
		state. 2048 trials is set by default.

What:		/sys/fs/f2fs/<disk>/extension_list
Date:		February 2018
Contact:	"Chao Yu" <yuchao0@huawei.com>
Description:	Used to control configure extension list:
		- Query: cat /sys/fs/f2fs/<disk>/extension_list
		- Add: echo '[h/c]extension' > /sys/fs/f2fs/<disk>/extension_list
		- Del: echo '[h/c]!extension' > /sys/fs/f2fs/<disk>/extension_list
		- [h] means add/del hot file extension
		- [c] means add/del cold file extension

What:		/sys/fs/f2fs/<disk>/unusable
Date:		April 2019
Contact:	"Daniel Rosenberg" <drosen@google.com>
Description:	If checkpoint=disable, it displays the number of blocks that
		are unusable.
		If checkpoint=enable it displays the number of blocks that
		would be unusable if checkpoint=disable were to be set.

What:		/sys/fs/f2fs/<disk>/encoding
Date:		July 2019
Contact:	"Daniel Rosenberg" <drosen@google.com>
Description:	Displays name and version of the encoding set for the filesystem.
		If no encoding is set, displays (none)

What:		/sys/fs/f2fs/<disk>/free_segments
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of free segments in disk.

What:		/sys/fs/f2fs/<disk>/cp_foreground_calls
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of checkpoint operations performed on demand. Available when
		CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/cp_background_calls
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of checkpoint operations performed in the background to
		free segments. Available when CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/gc_foreground_calls
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of garbage collection operations performed on demand.
		Available when CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/gc_background_calls
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of garbage collection operations triggered in background.
		Available when CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/moved_blocks_foreground
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of blocks moved by garbage collection in foreground.
		Available when CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/moved_blocks_background
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Number of blocks moved by garbage collection in background.
		Available when CONFIG_F2FS_STAT_FS=y.

What:		/sys/fs/f2fs/<disk>/avg_vblocks
Date:		September 2019
Contact:	"Hridya Valsaraju" <hridya@google.com>
Description:	Average number of valid blocks.
		Available when CONFIG_F2FS_STAT_FS=y.
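All of these nodes are plain text attributes, so they can be tuned from a program as easily as with echo. A minimal user-space sketch (the device name "sda1" is only a placeholder for the example) that turns on urgent background GC:

	#include <stdio.h>

	int main(void)
	{
		/* same effect as: echo 1 > /sys/fs/f2fs/sda1/gc_urgent */
		FILE *f = fopen("/sys/fs/f2fs/sda1/gc_urgent", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "1\n");
		fclose(f);
		return 0;
	}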
@@ -235,6 +235,17 @@ checkpoint=%s[:%u[%]]	Set to "disable" to turn off checkpointing. Set to "en
			hide up to all remaining free space. The actual space that
			would be unusable can be viewed at /sys/fs/f2fs/<disk>/unusable
			This space is reclaimed once checkpoint=enable.
compress_algorithm=%s	Control compress algorithm, currently f2fs supports "lzo"
			and "lz4" algorithm.
compress_log_size=%u	Support configuring compress cluster size, the size will
			be 4KB * (1 << %u), 16KB is minimum size, also it's
			default size.
compress_extension=%s	Support adding specified extension, so that f2fs can enable
			compression on those corresponding files, e.g. if all files
			with '.ext' have a high compression rate, we can set the '.ext'
			on the compression extension list and enable compression on
			these files by default rather than enabling it via ioctl.
			For other files, we can still enable compression via ioctl.
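For reference, a minimal user-space sketch that passes these options at mount time; the device and mount point are placeholders for the example, and compress_log_size=2 selects the minimum/default 16KB cluster:

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* hypothetical device and mount point, options as documented above */
		const char *opts = "compress_algorithm=lz4,compress_log_size=2,"
				   "compress_extension=log";

		if (mount("/dev/sdb1", "/mnt/f2fs", "f2fs", 0, opts)) {
			perror("mount");
			return 1;
		}
		return 0;
	}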
================================================================================
DEBUGFS ENTRIES
@@ -259,170 +270,6 @@ The files in each per-device directory are shown in table below.

Files in /sys/fs/f2fs/<devname>
(see also Documentation/ABI/testing/sysfs-fs-f2fs)
..............................................................................
 File                         Content

 gc_urgent_sleep_time         This parameter controls sleep time for gc_urgent.
                              500 ms is set by default. See above gc_urgent.

 gc_min_sleep_time            This tuning parameter controls the minimum sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_max_sleep_time            This tuning parameter controls the maximum sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_no_gc_sleep_time          This tuning parameter controls the default sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_idle                      This parameter controls the selection of victim
                              policy for garbage collection. Setting gc_idle = 0
                              (default) will disable this option. Setting
                              gc_idle = 1 will select the Cost Benefit approach
                              & setting gc_idle = 2 will select the greedy approach.

 gc_urgent                    This parameter controls triggering background GCs
                              urgently or not. Setting gc_urgent = 0 [default]
                              makes back to default behavior, while if it is set
                              to 1, background thread starts to do GC by given
                              gc_urgent_sleep_time interval.

 reclaim_segments             This parameter controls the number of prefree
                              segments to be reclaimed. If the number of prefree
                              segments is larger than the number of segments
                              in the proportion to the percentage over total
                              volume size, f2fs tries to conduct checkpoint to
                              reclaim the prefree segments to free segments.
                              By default, 5% over total # of segments.

 main_blkaddr                 This value gives the first block address of
                              MAIN area in the partition.

 max_small_discards           This parameter controls the number of discard
                              commands that consist of small blocks less than 2MB.
                              The candidates to be discarded are cached until
                              checkpoint is triggered, and issued during the
                              checkpoint. By default, it is disabled with 0.

 discard_granularity          This parameter controls the granularity of discard
                              command size. It will issue discard commands only if
                              the size is larger than given granularity. Its
                              unit size is 4KB, and 4 (=16KB) is set by default.
                              The maximum value is 128 (=512KB).

 reserved_blocks              This parameter indicates the number of blocks that
                              f2fs reserves internally for root.

 batched_trim_sections        This parameter controls the number of sections
                              to be trimmed out in batch mode when FITRIM
                              conducts. 32 sections is set by default.

 ipu_policy                   This parameter controls the policy of in-place
                              updates in f2fs. There are five policies:
                               0x01: F2FS_IPU_FORCE, 0x02: F2FS_IPU_SSR,
                               0x04: F2FS_IPU_UTIL,  0x08: F2FS_IPU_SSR_UTIL,
                               0x10: F2FS_IPU_FSYNC.

 min_ipu_util                 This parameter controls the threshold to trigger
                              in-place-updates. The number indicates percentage
                              of the filesystem utilization, and used by
                              F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.

 min_fsync_blocks             This parameter controls the threshold to trigger
                              in-place-updates when F2FS_IPU_FSYNC mode is set.
                              The number indicates the number of dirty pages
                              when fsync needs to flush on its call path. If
                              the number is less than this value, it triggers
                              in-place-updates.

 min_seq_blocks               This parameter controls the threshold to serialize
                              write IOs issued by multiple threads in parallel.

 min_hot_blocks               This parameter controls the threshold to allocate
                              a hot data log for pending data blocks to write.

 min_ssr_sections             This parameter adds the threshold when deciding
                              SSR block allocation. If this is large, SSR mode
                              will be enabled early.

 ram_thresh                   This parameter controls the memory footprint used
                              by free nids and cached nat entries. By default,
                              1 is set, which indicates 10 MB / 1 GB RAM.

 ra_nid_pages                 When building free nids, F2FS reads NAT blocks
                              ahead for speed up. Default is 0.

 dirty_nats_ratio             Given dirty ratio of cached nat entries, F2FS
                              determines flushing them in background.

 max_victim_search            This parameter controls the number of trials to
                              find a victim segment when conducting SSR and
                              cleaning operations. The default value is 4096
                              which covers 8GB block address range.

 migration_granularity        For large-sized sections, F2FS can stop GC given
                              this granularity instead of reclaiming entire
                              section.

 dir_level                    This parameter controls the directory level to
                              support large directory. If a directory has a
                              number of files, it can reduce the file lookup
                              latency by increasing this dir_level value.
                              Otherwise, it needs to decrease this value to
                              reduce the space overhead. The default value is 0.

 cp_interval                  F2FS tries to do checkpoint periodically, 60 secs
                              by default.

 idle_interval                F2FS detects system is idle, if there's no F2FS
                              operations during given interval, 5 secs by
                              default.

 discard_idle_interval        F2FS detects the discard thread is idle, given
                              time interval. Default is 5 secs.

 gc_idle_interval             F2FS detects the GC thread is idle, given time
                              interval. Default is 5 secs.

 umount_discard_timeout       When unmounting the disk, F2FS waits for finishing
                              queued discard commands which can take huge time.
                              This gives time out for it, 5 secs by default.

 iostat_enable                This controls to enable/disable iostat in F2FS.

 readdir_ra                   This enables/disables readahead of inode blocks
                              in readdir, and default is enabled.

 gc_pin_file_thresh           This indicates how many GC can be failed for the
                              pinned file. If it exceeds this, F2FS doesn't
                              guarantee its pinning state. 2048 trials is set
                              by default.

 extension_list               This enables to change extension_list for hot/cold
                              files in runtime.

 inject_rate                  This controls injection rate of arbitrary faults.

 inject_type                  This controls injection type of arbitrary faults.

 dirty_segments               This shows # of dirty segments.

 lifetime_write_kbytes        This shows # of data written to the disk.

 features                     This shows current features enabled on F2FS.

 current_reserved_blocks      This shows # of blocks currently reserved.

 unusable                     If checkpoint=disable, this shows the number of
                              blocks that are unusable.
                              If checkpoint=enable it shows the number of blocks
                              that would be unusable if checkpoint=disable were
                              to be set.

 encoding                     This shows the encoding used for casefolding.
                              If casefolding is not enabled, returns (none)

================================================================================
USAGE
@@ -840,3 +687,44 @@ zero or random data, which is useful to the below scenario where:
 4. address = fibmap(fd, offset)
 5. open(blkdev)
 6. write(blkdev, address)

Compression implementation
--------------------------

- New term named cluster is defined as basic unit of compression, file can
  be divided into multiple clusters logically. One cluster includes 4 << n
  (n >= 0) logical pages, compression size is also cluster size, each of
  cluster can be compressed or not.

- In cluster metadata layout, one special block address is used to indicate
  cluster is compressed one or normal one, for compressed cluster, following
  metadata maps cluster to [1, 4 << n - 1] physical blocks, in where f2fs
  stores data including compress header and compressed data.

- In order to eliminate write amplification during overwrite, F2FS only
  supports compression on write-once files: data can be compressed only when
  all logical blocks in the file are valid and the cluster compress ratio is
  lower than the specified threshold.

- To enable compression on a regular inode, there are three ways (an ioctl
  sketch of the first one follows this list):
  * chattr +c file
  * chattr +c dir; touch dir/file
  * mount w/ -o compress_extension=ext; touch file.ext
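As a rough illustration of the first method, the user-space sketch below sets the same per-inode flag that chattr +c toggles (FS_COMPR_FL, which matches the F2FS_COMPR_FL value 0x00000004 defined in f2fs.h). It is not part of f2fs itself, just the generic FS_IOC_SETFLAGS interface:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	int main(int argc, char **argv)
	{
		int fd, flags;

		if (argc < 2)
			return 1;
		fd = open(argv[1], O_RDONLY);
		if (fd < 0 || ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
			perror(argv[1]);
			return 1;
		}
		flags |= FS_COMPR_FL;		/* the bit chattr +c sets */
		if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
			perror("FS_IOC_SETFLAGS");
			return 1;
		}
		close(fd);
		return 0;
	}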
Compress metadata layout:
                                 [Dnode Structure]
                 +-----------------------------------------------+
                 | cluster 1 | cluster 2 | ......... | cluster N  |
                 +-----------------------------------------------+
                 .           .                       .           .
           .                       .               .                     .
     .          Compressed Cluster      .       .          Normal Cluster               .
+----------+---------+---------+---------+     +---------+---------+---------+---------+
|compr flag| block 1 | block 2 | block 3 |     | block 1 | block 2 | block 3 | block 4 |
+----------+---------+---------+---------+     +---------+---------+---------+---------+
           .                             .
              .                                        .
                 .                                                   .
        +-------------+-------------+----------+----------------------------+
        | data length | data chksum | reserved |      compressed data       |
        +-------------+-------------+----------+----------------------------+
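To tie the cluster terminology above to the compress_log_size mount option and the per-inode i_log_cluster_size field that appears later in the diff, here is an illustrative helper pair; these are not functions from f2fs, and they assume the usual 4KB page size:

	/* illustrative only, assuming 4KB pages */
	static inline unsigned int cluster_pages_example(unsigned int log_cluster_size)
	{
		return 1U << log_cluster_size;		/* log 2 -> 4 pages */
	}

	static inline unsigned int cluster_bytes_example(unsigned int log_cluster_size)
	{
		return 4096U << log_cluster_size;	/* log 2 -> 16KB */
	}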
@@ -22,7 +22,7 @@ config F2FS_FS

config F2FS_STAT_FS
	bool "F2FS Status Information"
	depends on F2FS_FS
	default y
	help
	  /sys/kernel/debug/f2fs/ contains information about all the partitions
@@ -93,3 +93,28 @@ config F2FS_FAULT_INJECTION
	  Test F2FS to inject faults such as ENOMEM, ENOSPC, and so on.

	  If unsure, say N.

config F2FS_FS_COMPRESSION
	bool "F2FS compression feature"
	depends on F2FS_FS
	help
	  Enable filesystem-level compression on f2fs regular files,
	  multiple back-end compression algorithms are supported.

config F2FS_FS_LZO
	bool "LZO compression support"
	depends on F2FS_FS_COMPRESSION
	select LZO_COMPRESS
	select LZO_DECOMPRESS
	default y
	help
	  Support LZO compress algorithm, if unsure, say Y.

config F2FS_FS_LZ4
	bool "LZ4 compression support"
	depends on F2FS_FS_COMPRESSION
	select LZ4_COMPRESS
	select LZ4_DECOMPRESS
	default y
	help
	  Support LZ4 compress algorithm, if unsure, say Y.
@@ -9,3 +9,4 @@ f2fs-$(CONFIG_F2FS_FS_XATTR) += xattr.o
f2fs-$(CONFIG_F2FS_FS_POSIX_ACL) += acl.o
f2fs-$(CONFIG_F2FS_IO_TRACE) += trace.o
f2fs-$(CONFIG_FS_VERITY) += verity.o
f2fs-$(CONFIG_F2FS_FS_COMPRESSION) += compress.o
@@ -1509,10 +1509,10 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
	f2fs_wait_on_all_pages_writeback(sbi);

	/*
	 * invalidate intermediate page cache borrowed from meta inode which are
	 * used for migration of encrypted or verity inode's blocks.
	 */
	if (f2fs_sb_has_encrypt(sbi) || f2fs_sb_has_verity(sbi))
		invalidate_mapping_pages(META_MAPPING(sbi),
				MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1);
fs/f2fs/data.c: 740 lines changed (the file difference is not shown because of its size).
@@ -21,9 +21,45 @@
#include "gc.h"

static LIST_HEAD(f2fs_stat_list);
static DEFINE_MUTEX(f2fs_stat_mutex);
#ifdef CONFIG_DEBUG_FS
static struct dentry *f2fs_debugfs_root;
#endif

/*
 * This function calculates BDF of every segments
 */
void f2fs_update_sit_info(struct f2fs_sb_info *sbi)
{
	struct f2fs_stat_info *si = F2FS_STAT(sbi);
	unsigned long long blks_per_sec, hblks_per_sec, total_vblocks;
	unsigned long long bimodal, dist;
	unsigned int segno, vblocks;
	int ndirty = 0;

	bimodal = 0;
	total_vblocks = 0;
	blks_per_sec = BLKS_PER_SEC(sbi);
	hblks_per_sec = blks_per_sec / 2;
	for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
		vblocks = get_valid_blocks(sbi, segno, true);
		dist = abs(vblocks - hblks_per_sec);
		bimodal += dist * dist;

		if (vblocks > 0 && vblocks < blks_per_sec) {
			total_vblocks += vblocks;
			ndirty++;
		}
	}
	dist = div_u64(MAIN_SECS(sbi) * hblks_per_sec * hblks_per_sec, 100);
	si->bimodal = div64_u64(bimodal, dist);
	if (si->dirty_count)
		si->avg_vblocks = div_u64(total_vblocks, ndirty);
	else
		si->avg_vblocks = 0;
}

#ifdef CONFIG_DEBUG_FS
static void update_general_status(struct f2fs_sb_info *sbi)
{
	struct f2fs_stat_info *si = F2FS_STAT(sbi);

@@ -56,7 +92,7 @@
	si->nquota_files = sbi->nquota_files;
	si->ndirty_all = sbi->ndirty_inode[DIRTY_META];
	si->inmem_pages = get_pages(sbi, F2FS_INMEM_PAGES);
	si->aw_cnt = sbi->atomic_files;
	si->vw_cnt = atomic_read(&sbi->vw_cnt);
	si->max_aw_cnt = atomic_read(&sbi->max_aw_cnt);
	si->max_vw_cnt = atomic_read(&sbi->max_vw_cnt);

@@ -94,6 +130,8 @@
	si->inline_xattr = atomic_read(&sbi->inline_xattr);
	si->inline_inode = atomic_read(&sbi->inline_inode);
	si->inline_dir = atomic_read(&sbi->inline_dir);
	si->compr_inode = atomic_read(&sbi->compr_inode);
	si->compr_blocks = atomic_read(&sbi->compr_blocks);
	si->append = sbi->im[APPEND_INO].ino_num;
	si->update = sbi->im[UPDATE_INO].ino_num;
	si->orphans = sbi->im[ORPHAN_INO].ino_num;

@@ -114,7 +152,6 @@
	si->free_nids = NM_I(sbi)->nid_cnt[FREE_NID];
	si->avail_nids = NM_I(sbi)->available_nids;
	si->alloc_nids = NM_I(sbi)->nid_cnt[PREALLOC_NID];
	si->bg_gc = sbi->bg_gc;
	si->io_skip_bggc = sbi->io_skip_bggc;
	si->other_skip_bggc = sbi->other_skip_bggc;
	si->skipped_atomic_files[BG_GC] = sbi->skipped_atomic_files[BG_GC];

@@ -145,39 +182,6 @@
	si->inplace_count = atomic_read(&sbi->inplace_count);
}

/*
 * This function calculates memory footprint.
 */

@@ -315,6 +319,8 @@
			   si->inline_inode);
		seq_printf(s, "  - Inline_dentry Inode: %u\n",
			   si->inline_dir);
		seq_printf(s, "  - Compressed Inode: %u, Blocks: %u\n",
			   si->compr_inode, si->compr_blocks);
		seq_printf(s, "  - Orphan/Append/Update Inode: %u, %u, %u\n",
			   si->orphans, si->append, si->update);
		seq_printf(s, "\nMain area: %d segs, %d secs %d zones\n",

@@ -441,7 +447,7 @@
			   si->block_count[LFS], si->segment_count[LFS]);

	/* segment usage info */
	f2fs_update_sit_info(si->sbi);
	seq_printf(s, "\nBDF: %u, avg. vblocks: %u\n",
		   si->bimodal, si->avg_vblocks);

@@ -461,6 +467,7 @@
}

DEFINE_SHOW_ATTRIBUTE(stat);
#endif

int f2fs_build_stats(struct f2fs_sb_info *sbi)
{

@@ -491,11 +498,12 @@
	atomic_set(&sbi->inline_xattr, 0);
	atomic_set(&sbi->inline_inode, 0);
	atomic_set(&sbi->inline_dir, 0);
	atomic_set(&sbi->compr_inode, 0);
	atomic_set(&sbi->compr_blocks, 0);
	atomic_set(&sbi->inplace_count, 0);
	for (i = META_CP; i < META_MAX; i++)
		atomic_set(&sbi->meta_count[i], 0);

	atomic_set(&sbi->aw_cnt, 0);
	atomic_set(&sbi->vw_cnt, 0);
	atomic_set(&sbi->max_aw_cnt, 0);
	atomic_set(&sbi->max_vw_cnt, 0);

@@ -520,14 +528,18 @@ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)

void __init f2fs_create_root_stats(void)
{
#ifdef CONFIG_DEBUG_FS
	f2fs_debugfs_root = debugfs_create_dir("f2fs", NULL);

	debugfs_create_file("status", S_IRUGO, f2fs_debugfs_root, NULL,
			    &stat_fops);
#endif
}

void f2fs_destroy_root_stats(void)
{
#ifdef CONFIG_DEBUG_FS
	debugfs_remove_recursive(f2fs_debugfs_root);
	f2fs_debugfs_root = NULL;
#endif
}
@@ -578,6 +578,20 @@ next:
	goto next;
}

bool f2fs_has_enough_room(struct inode *dir, struct page *ipage,
					struct fscrypt_name *fname)
{
	struct f2fs_dentry_ptr d;
	unsigned int bit_pos;
	int slots = GET_DENTRY_SLOTS(fname_len(fname));

	make_dentry_ptr_inline(dir, &d, inline_data_addr(dir, ipage));

	bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);

	return bit_pos < d.max;
}

void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
			const struct qstr *name, f2fs_hash_t name_hash,
			unsigned int bit_pos)

@@ -1069,24 +1083,27 @@ static int f2fs_d_compare(const struct dentry *dentry, unsigned int len,
		const char *str, const struct qstr *name)
{
	struct qstr qstr = {.name = str, .len = len };
	const struct dentry *parent = READ_ONCE(dentry->d_parent);
	const struct inode *inode = READ_ONCE(parent->d_inode);

	if (!inode || !IS_CASEFOLDED(inode)) {
		if (len != name->len)
			return -1;
		return memcmp(str, name->name, len);
	}

	return f2fs_ci_compare(inode, name, &qstr, false);
}

static int f2fs_d_hash(const struct dentry *dentry, struct qstr *str)
{
	struct f2fs_sb_info *sbi = F2FS_SB(dentry->d_sb);
	const struct unicode_map *um = sbi->s_encoding;
	const struct inode *inode = READ_ONCE(dentry->d_inode);
	unsigned char *norm;
	int len, ret = 0;

	if (!inode || !IS_CASEFOLDED(inode))
		return 0;

	norm = f2fs_kmalloc(sbi, PATH_MAX, GFP_ATOMIC);
fs/f2fs/f2fs.h (326 lines changed)

@@ -116,6 +116,8 @@ typedef u32 block_t;	/*
			 */
typedef u32 nid_t;

#define COMPRESS_EXT_NUM		16

struct f2fs_mount_info {
	unsigned int opt;
	int write_io_size_bits;		/* Write IO size bits */

@@ -140,6 +142,12 @@ struct f2fs_mount_info {
	block_t unusable_cap;		/* Amount of space allowed to be
					 * unusable when disabling checkpoint
					 */

	/* For compression */
	unsigned char compress_algorithm;	/* algorithm type */
	unsigned compress_log_size;		/* cluster log size */
	unsigned char compress_ext_cnt;		/* extension count */
	unsigned char extensions[COMPRESS_EXT_NUM][F2FS_EXTENSION_LEN];	/* extensions */
};

#define F2FS_FEATURE_ENCRYPT		0x0001

@@ -155,6 +163,7 @@ struct f2fs_mount_info {
#define F2FS_FEATURE_VERITY		0x0400
#define F2FS_FEATURE_SB_CHKSUM		0x0800
#define F2FS_FEATURE_CASEFOLD		0x1000
#define F2FS_FEATURE_COMPRESSION	0x2000

#define __F2FS_HAS_FEATURE(raw_super, mask)				\
	((raw_super->feature & cpu_to_le32(mask)) != 0)

@@ -712,6 +721,12 @@ struct f2fs_inode_info {
	int i_inline_xattr_size;	/* inline xattr size */
	struct timespec64 i_crtime;	/* inode creation time */
	struct timespec64 i_disk_time[4];/* inode disk times */

	/* for file compress */
	u64 i_compr_blocks;			/* # of compressed blocks */
	unsigned char i_compress_algorithm;	/* algorithm type */
	unsigned char i_log_cluster_size;	/* log of cluster size */
	unsigned int i_cluster_size;		/* cluster size */
};

static inline void get_extent_info(struct extent_info *ext,

@@ -1018,6 +1033,7 @@ enum need_lock_type {
enum cp_reason_type {
	CP_NO_NEEDED,
	CP_NON_REGULAR,
	CP_COMPRESSED,
	CP_HARDLINK,
	CP_SB_NEED_CP,
	CP_WRONG_PINO,

@@ -1056,12 +1072,15 @@ struct f2fs_io_info {
	block_t old_blkaddr;	/* old block address before Cow */
	struct page *page;	/* page to be written */
	struct page *encrypted_page;	/* encrypted page */
	struct page *compressed_page;	/* compressed page */
	struct list_head list;		/* serialize IOs */
	bool submitted;		/* indicate IO submission */
	int need_lock;		/* indicate we need to lock cp_rwsem */
	bool in_list;		/* indicate fio is in io_list */
	bool is_por;		/* indicate IO is from recovery or not */
	bool retry;		/* need to reallocate block address */
	int compr_blocks;	/* # of compressed block addresses */
	bool encrypted;		/* indicate file is encrypted */
	enum iostat_type io_type;	/* io type */
	struct writeback_control *io_wbc; /* writeback control */
	struct bio **bio;		/* bio for ipu */

@@ -1169,6 +1188,18 @@ enum fsync_mode {
	FSYNC_MODE_NOBARRIER,	/* fsync behaves nobarrier based on posix */
};

/*
 * this value is set in page as a private data which indicate that
 * the page is atomically written, and it is in inmem_pages list.
 */
#define ATOMIC_WRITTEN_PAGE		((unsigned long)-1)
#define DUMMY_WRITTEN_PAGE		((unsigned long)-2)

#define IS_ATOMIC_WRITTEN_PAGE(page)			\
		(page_private(page) == (unsigned long)ATOMIC_WRITTEN_PAGE)
#define IS_DUMMY_WRITTEN_PAGE(page)			\
		(page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE)

#ifdef CONFIG_FS_ENCRYPTION
#define DUMMY_ENCRYPTION_ENABLED(sbi) \
	(unlikely(F2FS_OPTION(sbi).test_dummy_encryption))

@@ -1176,6 +1207,75 @@ enum fsync_mode {
#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
#endif

/* For compression */
enum compress_algorithm_type {
	COMPRESS_LZO,
	COMPRESS_LZ4,
	COMPRESS_MAX,
};

#define COMPRESS_DATA_RESERVED_SIZE		4
struct compress_data {
	__le32 clen;			/* compressed data size */
	__le32 chksum;			/* checksum of compressed data */
	__le32 reserved[COMPRESS_DATA_RESERVED_SIZE];	/* reserved */
	u8 cdata[];			/* compressed data */
};

#define COMPRESS_HEADER_SIZE	(sizeof(struct compress_data))

#define F2FS_COMPRESSED_PAGE_MAGIC	0xF5F2C000
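Putting the header above together with the cluster layout documented earlier, a compressed cluster occupies COMPRESS_HEADER_SIZE plus clen bytes on disk, rounded up to whole blocks. A kernel-style sketch of that arithmetic follows; it is illustrative only, not a function from f2fs, and it assumes the usual 4KB F2FS_BLKSIZE:

	/* illustrative only: blocks consumed by one compressed cluster */
	static inline unsigned int compressed_cluster_blocks_example(size_t clen)
	{
		return DIV_ROUND_UP(COMPRESS_HEADER_SIZE + clen, F2FS_BLKSIZE);
	}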
/* compress context */
struct compress_ctx {
	struct inode *inode;		/* inode the context belong to */
	pgoff_t cluster_idx;		/* cluster index number */
	unsigned int cluster_size;	/* page count in cluster */
	unsigned int log_cluster_size;	/* log of cluster size */
	struct page **rpages;		/* pages store raw data in cluster */
	unsigned int nr_rpages;		/* total page number in rpages */
	struct page **cpages;		/* pages store compressed data in cluster */
	unsigned int nr_cpages;		/* total page number in cpages */
	void *rbuf;			/* virtual mapped address on rpages */
	struct compress_data *cbuf;	/* virtual mapped address on cpages */
	size_t rlen;			/* valid data length in rbuf */
	size_t clen;			/* valid data length in cbuf */
	void *private;			/* payload buffer for specified compression algorithm */
};

/* compress context for write IO path */
struct compress_io_ctx {
	u32 magic;			/* magic number to indicate page is compressed */
	struct inode *inode;		/* inode the context belong to */
	struct page **rpages;		/* pages store raw data in cluster */
	unsigned int nr_rpages;		/* total page number in rpages */
	refcount_t ref;			/* referrence count of raw page */
};

/* decompress io context for read IO path */
struct decompress_io_ctx {
	u32 magic;			/* magic number to indicate page is compressed */
	struct inode *inode;		/* inode the context belong to */
	pgoff_t cluster_idx;		/* cluster index number */
	unsigned int cluster_size;	/* page count in cluster */
	unsigned int log_cluster_size;	/* log of cluster size */
	struct page **rpages;		/* pages store raw data in cluster */
	unsigned int nr_rpages;		/* total page number in rpages */
	struct page **cpages;		/* pages store compressed data in cluster */
	unsigned int nr_cpages;		/* total page number in cpages */
	struct page **tpages;		/* temp pages to pad holes in cluster */
	void *rbuf;			/* virtual mapped address on rpages */
	struct compress_data *cbuf;	/* virtual mapped address on cpages */
	size_t rlen;			/* valid data length in rbuf */
	size_t clen;			/* valid data length in cbuf */
	refcount_t ref;			/* referrence count of compressed page */
	bool failed;			/* indicate IO error during decompression */
};

#define NULL_CLUSTER			((unsigned int)(~0))
#define MIN_COMPRESS_LOG_SIZE		2
#define MAX_COMPRESS_LOG_SIZE		8
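For orientation, the cluster_idx fields above are simply the file's page index shifted down by the log of the cluster size. An illustrative helper pair (not f2fs code), assuming log_cluster_size stays within the bounds just defined:

	/* illustrative only: mapping a page index onto its cluster */
	static inline pgoff_t cluster_idx_example(pgoff_t page_idx,
						  unsigned int log_cluster_size)
	{
		return page_idx >> log_cluster_size;
	}

	static inline unsigned int offset_in_cluster_example(pgoff_t page_idx,
						  unsigned int log_cluster_size)
	{
		return page_idx & ((1U << log_cluster_size) - 1);
	}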
struct f2fs_sb_info {
|
||||
struct super_block *sb; /* pointer to VFS super block */
|
||||
struct proc_dir_entry *s_proc; /* proc entry */
|
||||
|
@ -1291,7 +1391,10 @@ struct f2fs_sb_info {
|
|||
struct f2fs_mount_info mount_opt; /* mount options */
|
||||
|
||||
/* for cleaning operations */
|
||||
struct mutex gc_mutex; /* mutex for GC */
|
||||
struct rw_semaphore gc_lock; /*
|
||||
* semaphore for GC, avoid
|
||||
* race between GC and GC or CP
|
||||
*/
|
||||
struct f2fs_gc_kthread *gc_thread; /* GC thread */
|
||||
unsigned int cur_victim_sec; /* current victim section num */
|
||||
unsigned int gc_mode; /* current GC state */
|
||||
|
@ -1327,11 +1430,11 @@ struct f2fs_sb_info {
|
|||
atomic_t inline_xattr; /* # of inline_xattr inodes */
|
||||
atomic_t inline_inode; /* # of inline_data inodes */
|
||||
atomic_t inline_dir; /* # of inline_dentry inodes */
|
||||
atomic_t aw_cnt; /* # of atomic writes */
|
||||
atomic_t compr_inode; /* # of compressed inodes */
|
||||
atomic_t compr_blocks; /* # of compressed blocks */
|
||||
atomic_t vw_cnt; /* # of volatile writes */
|
||||
atomic_t max_aw_cnt; /* max # of atomic writes */
|
||||
atomic_t max_vw_cnt; /* max # of volatile writes */
|
||||
int bg_gc; /* background gc calls */
|
||||
unsigned int io_skip_bggc; /* skip background gc for in-flight IO */
|
||||
unsigned int other_skip_bggc; /* skip background gc for other reasons */
|
||||
unsigned int ndirty_inode[NR_INODE_TYPE]; /* # of dirty inodes */
|
||||
|
@ -1365,6 +1468,8 @@ struct f2fs_sb_info {
|
|||
|
||||
/* Precomputed FS UUID checksum for seeding other checksums */
|
||||
__u32 s_chksum_seed;
|
||||
|
||||
struct workqueue_struct *post_read_wq; /* post read workqueue */
|
||||
};
|
||||
|
||||
struct f2fs_private_dio {
|
||||
|
@ -2222,26 +2327,6 @@ static inline void *f2fs_kmem_cache_alloc(struct kmem_cache *cachep,
|
|||
return entry;
|
||||
}
|
||||
|
||||
static inline struct bio *f2fs_bio_alloc(struct f2fs_sb_info *sbi,
|
||||
int npages, bool no_fail)
|
||||
{
|
||||
struct bio *bio;
|
||||
|
||||
if (no_fail) {
|
||||
/* No failure on bio allocation */
|
||||
bio = bio_alloc(GFP_NOIO, npages);
|
||||
if (!bio)
|
||||
bio = bio_alloc(GFP_NOIO | __GFP_NOFAIL, npages);
|
||||
return bio;
|
||||
}
|
||||
if (time_to_inject(sbi, FAULT_ALLOC_BIO)) {
|
||||
f2fs_show_injection_info(sbi, FAULT_ALLOC_BIO);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
return bio_alloc(GFP_KERNEL, npages);
|
||||
}
|
||||
|
||||
static inline bool is_idle(struct f2fs_sb_info *sbi, int type)
|
||||
{
|
||||
if (sbi->gc_mode == GC_URGENT)
|
||||
|
@ -2378,11 +2463,13 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
|
|||
/*
|
||||
* On-disk inode flags (f2fs_inode::i_flags)
|
||||
*/
|
||||
#define F2FS_COMPR_FL 0x00000004 /* Compress file */
|
||||
#define F2FS_SYNC_FL 0x00000008 /* Synchronous updates */
|
||||
#define F2FS_IMMUTABLE_FL 0x00000010 /* Immutable file */
|
||||
#define F2FS_APPEND_FL 0x00000020 /* writes to file may only append */
|
||||
#define F2FS_NODUMP_FL 0x00000040 /* do not dump file */
|
||||
#define F2FS_NOATIME_FL 0x00000080 /* do not update atime */
|
||||
#define F2FS_NOCOMP_FL 0x00000400 /* Don't compress */
|
||||
#define F2FS_INDEX_FL 0x00001000 /* hash-indexed directory */
|
||||
#define F2FS_DIRSYNC_FL 0x00010000 /* dirsync behaviour (directories only) */
|
||||
#define F2FS_PROJINHERIT_FL 0x20000000 /* Create with parents projid */
|
||||
|
@ -2391,7 +2478,7 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
|
|||
/* Flags that should be inherited by new inodes from their parent. */
|
||||
#define F2FS_FL_INHERITED (F2FS_SYNC_FL | F2FS_NODUMP_FL | F2FS_NOATIME_FL | \
|
||||
F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \
|
||||
F2FS_CASEFOLD_FL)
|
||||
F2FS_CASEFOLD_FL | F2FS_COMPR_FL | F2FS_NOCOMP_FL)
|
||||
|
||||
/* Flags that are appropriate for regular files (all but dir-specific ones). */
|
||||
#define F2FS_REG_FLMASK (~(F2FS_DIRSYNC_FL | F2FS_PROJINHERIT_FL | \
|
||||
|
@ -2443,6 +2530,8 @@ enum {
|
|||
FI_PIN_FILE, /* indicate file should not be gced */
|
||||
FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
|
||||
FI_VERITY_IN_PROGRESS, /* building fs-verity Merkle tree */
|
||||
FI_COMPRESSED_FILE, /* indicate file's data can be compressed */
|
||||
FI_MMAP_FILE, /* indicate file was mmapped */
|
||||
};
|
||||
|
||||
static inline void __mark_inode_dirty_flag(struct inode *inode,
|
||||
|
@ -2459,6 +2548,7 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
|
|||
case FI_DATA_EXIST:
|
||||
case FI_INLINE_DOTS:
|
||||
case FI_PIN_FILE:
|
||||
case FI_COMPRESSED_FILE:
|
||||
f2fs_mark_inode_dirty_sync(inode, true);
|
||||
}
|
||||
}
|
||||
|
@@ -2614,16 +2704,27 @@ static inline int f2fs_has_inline_xattr(struct inode *inode)
	return is_inode_flag_set(inode, FI_INLINE_XATTR);
}

static inline int f2fs_compressed_file(struct inode *inode)
{
	return S_ISREG(inode->i_mode) &&
		is_inode_flag_set(inode, FI_COMPRESSED_FILE);
}

static inline unsigned int addrs_per_inode(struct inode *inode)
{
	unsigned int addrs = CUR_ADDRS_PER_INODE(inode) -
				get_inline_xattr_addrs(inode);
	return ALIGN_DOWN(addrs, 1);

	if (!f2fs_compressed_file(inode))
		return addrs;
	return ALIGN_DOWN(addrs, F2FS_I(inode)->i_cluster_size);
}

static inline unsigned int addrs_per_block(struct inode *inode)
{
	return ALIGN_DOWN(DEF_ADDRS_PER_BLOCK, 1);
	if (!f2fs_compressed_file(inode))
		return DEF_ADDRS_PER_BLOCK;
	return ALIGN_DOWN(DEF_ADDRS_PER_BLOCK, F2FS_I(inode)->i_cluster_size);
}

static inline void *inline_xattr_addr(struct inode *inode, struct page *page)
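The effect of the ALIGN_DOWN() change above is easiest to see with concrete numbers; a minimal userspace sketch (the macro here is a simplified stand-in, and the values are examples rather than f2fs defaults):

	/* Sketch: rounding an inode's usable block-address count down to a whole
	 * number of compression clusters, mirroring the calls above. */
	#include <stdio.h>

	#define ALIGN_DOWN(x, a)	((x) / (a) * (a))

	int main(void)
	{
		unsigned int addrs = 923;	/* hypothetical usable address slots */
		unsigned int cluster_size = 4;	/* blocks per cluster (1 << log_cluster_size) */

		/* a non-compressed inode keeps all 923 slots; a compressed one only
		 * addresses whole clusters, so the trailing 923 % 4 = 3 slots go unused */
		printf("aligned = %u\n", ALIGN_DOWN(addrs, cluster_size)); /* 920 */
		return 0;
	}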
@ -2656,6 +2757,11 @@ static inline int f2fs_has_inline_dots(struct inode *inode)
|
|||
return is_inode_flag_set(inode, FI_INLINE_DOTS);
|
||||
}
|
||||
|
||||
static inline int f2fs_is_mmap_file(struct inode *inode)
|
||||
{
|
||||
return is_inode_flag_set(inode, FI_MMAP_FILE);
|
||||
}
|
||||
|
||||
static inline bool f2fs_is_pinned_file(struct inode *inode)
|
||||
{
|
||||
return is_inode_flag_set(inode, FI_PIN_FILE);
|
||||
|
@ -2783,7 +2889,8 @@ static inline bool f2fs_may_extent_tree(struct inode *inode)
|
|||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
|
||||
if (!test_opt(sbi, EXTENT_CACHE) ||
|
||||
is_inode_flag_set(inode, FI_NO_EXTENT))
|
||||
is_inode_flag_set(inode, FI_NO_EXTENT) ||
|
||||
is_inode_flag_set(inode, FI_COMPRESSED_FILE))
|
||||
return false;
|
||||
|
||||
/*
|
||||
|
@@ -2903,7 +3010,8 @@ static inline void verify_blkaddr(struct f2fs_sb_info *sbi,

static inline bool __is_valid_data_blkaddr(block_t blkaddr)
{
	if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR)
	if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR ||
			blkaddr == COMPRESS_ADDR)
		return false;
	return true;
}
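For context, NULL_ADDR, NEW_ADDR and the COMPRESS_ADDR marker used above are reserved sentinel values rather than real on-disk locations, so they must be filtered out before an address is read or invalidated. A small illustrative predicate; the numeric values and names here are placeholders, not the actual f2fs constants:

	/* Illustrative only: sentinel block addresses are rejected before a block
	 * address is treated as a real, readable location. */
	typedef unsigned int block_t;

	enum {
		EX_NULL_ADDR     = 0,		/* hole / unallocated */
		EX_NEW_ADDR      = 0xFFFFFFFF,	/* reserved but not yet written */
		EX_COMPRESS_ADDR = 0xFFFFFFFE,	/* slot covered by a compressed cluster */
	};

	static int ex_is_valid_data_blkaddr(block_t blkaddr)
	{
		return blkaddr != EX_NULL_ADDR &&
		       blkaddr != EX_NEW_ADDR &&
		       blkaddr != EX_COMPRESS_ADDR;
	}

	int main(void)
	{
		return ex_is_valid_data_blkaddr(EX_COMPRESS_ADDR); /* 0: filtered out */
	}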
@ -3001,6 +3109,8 @@ ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr,
|
|||
struct page **page);
|
||||
void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
|
||||
struct page *page, struct inode *inode);
|
||||
bool f2fs_has_enough_room(struct inode *dir, struct page *ipage,
|
||||
struct fscrypt_name *fname);
|
||||
void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
|
||||
const struct qstr *name, f2fs_hash_t name_hash,
|
||||
unsigned int bit_pos);
|
||||
|
@ -3155,6 +3265,8 @@ void f2fs_write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
|
|||
int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
|
||||
unsigned int val, int alloc);
|
||||
void f2fs_flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
|
||||
int f2fs_fix_curseg_write_pointer(struct f2fs_sb_info *sbi);
|
||||
int f2fs_check_write_pointer(struct f2fs_sb_info *sbi);
|
||||
int f2fs_build_segment_manager(struct f2fs_sb_info *sbi);
|
||||
void f2fs_destroy_segment_manager(struct f2fs_sb_info *sbi);
|
||||
int __init f2fs_create_segment_manager_caches(void);
|
||||
|
@ -3205,10 +3317,13 @@ void f2fs_destroy_checkpoint_caches(void);
|
|||
/*
|
||||
* data.c
|
||||
*/
|
||||
int f2fs_init_post_read_processing(void);
|
||||
void f2fs_destroy_post_read_processing(void);
|
||||
int __init f2fs_init_bioset(void);
|
||||
void f2fs_destroy_bioset(void);
|
||||
struct bio *f2fs_bio_alloc(struct f2fs_sb_info *sbi, int npages, bool no_fail);
|
||||
int f2fs_init_bio_entry_cache(void);
|
||||
void f2fs_destroy_bio_entry_cache(void);
|
||||
void f2fs_submit_bio(struct f2fs_sb_info *sbi,
|
||||
struct bio *bio, enum page_type type);
|
||||
void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type);
|
||||
void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
|
||||
struct inode *inode, struct page *page,
|
||||
|
@ -3245,8 +3360,14 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
|
|||
int create, int flag);
|
||||
int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
|
||||
u64 start, u64 len);
|
||||
int f2fs_encrypt_one_page(struct f2fs_io_info *fio);
|
||||
bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio);
|
||||
bool f2fs_should_update_outplace(struct inode *inode, struct f2fs_io_info *fio);
|
||||
int f2fs_write_single_data_page(struct page *page, int *submitted,
|
||||
struct bio **bio, sector_t *last_block,
|
||||
struct writeback_control *wbc,
|
||||
enum iostat_type io_type,
|
||||
int compr_blocks);
|
||||
void f2fs_invalidate_page(struct page *page, unsigned int offset,
|
||||
unsigned int length);
|
||||
int f2fs_release_page(struct page *page, gfp_t wait);
|
||||
|
@ -3256,6 +3377,10 @@ int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
|
|||
#endif
|
||||
bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len);
|
||||
void f2fs_clear_page_cache_dirty_tag(struct page *page);
|
||||
int f2fs_init_post_read_processing(void);
|
||||
void f2fs_destroy_post_read_processing(void);
|
||||
int f2fs_init_post_read_wq(struct f2fs_sb_info *sbi);
|
||||
void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi);
|
||||
|
||||
/*
|
||||
* gc.c
|
||||
|
@ -3302,6 +3427,7 @@ struct f2fs_stat_info {
|
|||
int nr_discard_cmd;
|
||||
unsigned int undiscard_blks;
|
||||
int inline_xattr, inline_inode, inline_dir, append, update, orphans;
|
||||
int compr_inode, compr_blocks;
|
||||
int aw_cnt, max_aw_cnt, vw_cnt, max_vw_cnt;
|
||||
unsigned int valid_count, valid_node_count, valid_inode_count, discard_blks;
|
||||
unsigned int bimodal, avg_vblocks;
|
||||
|
@ -3333,7 +3459,7 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
|
|||
#define stat_inc_cp_count(si) ((si)->cp_count++)
|
||||
#define stat_inc_bg_cp_count(si) ((si)->bg_cp_count++)
|
||||
#define stat_inc_call_count(si) ((si)->call_count++)
|
||||
#define stat_inc_bggc_count(sbi) ((sbi)->bg_gc++)
|
||||
#define stat_inc_bggc_count(si) ((si)->bg_gc++)
|
||||
#define stat_io_skip_bggc_count(sbi) ((sbi)->io_skip_bggc++)
|
||||
#define stat_other_skip_bggc_count(sbi) ((sbi)->other_skip_bggc++)
|
||||
#define stat_inc_dirty_inode(sbi, type) ((sbi)->ndirty_inode[type]++)
|
||||
|
@@ -3372,6 +3498,20 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
		if (f2fs_has_inline_dentry(inode))			\
			(atomic_dec(&F2FS_I_SB(inode)->inline_dir));	\
	} while (0)
#define stat_inc_compr_inode(inode)					\
	do {								\
		if (f2fs_compressed_file(inode))			\
			(atomic_inc(&F2FS_I_SB(inode)->compr_inode));	\
	} while (0)
#define stat_dec_compr_inode(inode)					\
	do {								\
		if (f2fs_compressed_file(inode))			\
			(atomic_dec(&F2FS_I_SB(inode)->compr_inode));	\
	} while (0)
#define stat_add_compr_blocks(inode, blocks)				\
		(atomic_add(blocks, &F2FS_I_SB(inode)->compr_blocks))
#define stat_sub_compr_blocks(inode, blocks)				\
		(atomic_sub(blocks, &F2FS_I_SB(inode)->compr_blocks))
#define stat_inc_meta_count(sbi, blkaddr)				\
	do {								\
		if (blkaddr < SIT_I(sbi)->sit_base_addr)		\
@ -3389,13 +3529,9 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
|
|||
((sbi)->block_count[(curseg)->alloc_type]++)
|
||||
#define stat_inc_inplace_blocks(sbi) \
|
||||
(atomic_inc(&(sbi)->inplace_count))
|
||||
#define stat_inc_atomic_write(inode) \
|
||||
(atomic_inc(&F2FS_I_SB(inode)->aw_cnt))
|
||||
#define stat_dec_atomic_write(inode) \
|
||||
(atomic_dec(&F2FS_I_SB(inode)->aw_cnt))
|
||||
#define stat_update_max_atomic_write(inode) \
|
||||
do { \
|
||||
int cur = atomic_read(&F2FS_I_SB(inode)->aw_cnt); \
|
||||
int cur = F2FS_I_SB(inode)->atomic_files; \
|
||||
int max = atomic_read(&F2FS_I_SB(inode)->max_aw_cnt); \
|
||||
if (cur > max) \
|
||||
atomic_set(&F2FS_I_SB(inode)->max_aw_cnt, cur); \
|
||||
|
@ -3447,6 +3583,7 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi);
|
|||
void f2fs_destroy_stats(struct f2fs_sb_info *sbi);
|
||||
void __init f2fs_create_root_stats(void);
|
||||
void f2fs_destroy_root_stats(void);
|
||||
void f2fs_update_sit_info(struct f2fs_sb_info *sbi);
|
||||
#else
|
||||
#define stat_inc_cp_count(si) do { } while (0)
|
||||
#define stat_inc_bg_cp_count(si) do { } while (0)
|
||||
|
@ -3456,8 +3593,8 @@ void f2fs_destroy_root_stats(void);
|
|||
#define stat_other_skip_bggc_count(sbi) do { } while (0)
|
||||
#define stat_inc_dirty_inode(sbi, type) do { } while (0)
|
||||
#define stat_dec_dirty_inode(sbi, type) do { } while (0)
|
||||
#define stat_inc_total_hit(sb) do { } while (0)
|
||||
#define stat_inc_rbtree_node_hit(sb) do { } while (0)
|
||||
#define stat_inc_total_hit(sbi) do { } while (0)
|
||||
#define stat_inc_rbtree_node_hit(sbi) do { } while (0)
|
||||
#define stat_inc_largest_node_hit(sbi) do { } while (0)
|
||||
#define stat_inc_cached_node_hit(sbi) do { } while (0)
|
||||
#define stat_inc_inline_xattr(inode) do { } while (0)
|
||||
|
@ -3466,6 +3603,10 @@ void f2fs_destroy_root_stats(void);
|
|||
#define stat_dec_inline_inode(inode) do { } while (0)
|
||||
#define stat_inc_inline_dir(inode) do { } while (0)
|
||||
#define stat_dec_inline_dir(inode) do { } while (0)
|
||||
#define stat_inc_compr_inode(inode) do { } while (0)
|
||||
#define stat_dec_compr_inode(inode) do { } while (0)
|
||||
#define stat_add_compr_blocks(inode, blocks) do { } while (0)
|
||||
#define stat_sub_compr_blocks(inode, blocks) do { } while (0)
|
||||
#define stat_inc_atomic_write(inode) do { } while (0)
|
||||
#define stat_dec_atomic_write(inode) do { } while (0)
|
||||
#define stat_update_max_atomic_write(inode) do { } while (0)
|
||||
|
@ -3485,6 +3626,7 @@ static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
|
|||
static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
|
||||
static inline void __init f2fs_create_root_stats(void) { }
|
||||
static inline void f2fs_destroy_root_stats(void) { }
|
||||
static inline void update_sit_info(struct f2fs_sb_info *sbi) {}
|
||||
#endif
|
||||
|
||||
extern const struct file_operations f2fs_dir_operations;
|
||||
|
@ -3513,6 +3655,7 @@ void f2fs_truncate_inline_inode(struct inode *inode,
|
|||
int f2fs_read_inline_data(struct inode *inode, struct page *page);
|
||||
int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page);
|
||||
int f2fs_convert_inline_inode(struct inode *inode);
|
||||
int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry);
|
||||
int f2fs_write_inline_data(struct inode *inode, struct page *page);
|
||||
bool f2fs_recover_inline_data(struct inode *inode, struct page *npage);
|
||||
struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
|
||||
|
@@ -3605,7 +3748,85 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
 */
static inline bool f2fs_post_read_required(struct inode *inode)
{
	return f2fs_encrypted_file(inode) || fsverity_active(inode);
	return f2fs_encrypted_file(inode) || fsverity_active(inode) ||
		f2fs_compressed_file(inode);
}

/*
 * compress.c
 */
#ifdef CONFIG_F2FS_FS_COMPRESSION
|
||||
bool f2fs_is_compressed_page(struct page *page);
|
||||
struct page *f2fs_compress_control_page(struct page *page);
|
||||
int f2fs_prepare_compress_overwrite(struct inode *inode,
|
||||
struct page **pagep, pgoff_t index, void **fsdata);
|
||||
bool f2fs_compress_write_end(struct inode *inode, void *fsdata,
|
||||
pgoff_t index, unsigned copied);
|
||||
void f2fs_compress_write_end_io(struct bio *bio, struct page *page);
|
||||
bool f2fs_is_compress_backend_ready(struct inode *inode);
|
||||
void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity);
|
||||
bool f2fs_cluster_is_empty(struct compress_ctx *cc);
|
||||
bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
|
||||
void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
|
||||
int f2fs_write_multi_pages(struct compress_ctx *cc,
|
||||
int *submitted,
|
||||
struct writeback_control *wbc,
|
||||
enum iostat_type io_type);
|
||||
int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index);
|
||||
int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
|
||||
unsigned nr_pages, sector_t *last_block_in_bio,
|
||||
bool is_readahead);
|
||||
struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc);
|
||||
void f2fs_free_dic(struct decompress_io_ctx *dic);
|
||||
void f2fs_decompress_end_io(struct page **rpages,
|
||||
unsigned int cluster_size, bool err, bool verity);
|
||||
int f2fs_init_compress_ctx(struct compress_ctx *cc);
|
||||
void f2fs_destroy_compress_ctx(struct compress_ctx *cc);
|
||||
void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
|
||||
#else
|
||||
static inline bool f2fs_is_compressed_page(struct page *page) { return false; }
|
||||
static inline bool f2fs_is_compress_backend_ready(struct inode *inode)
|
||||
{
|
||||
if (!f2fs_compressed_file(inode))
|
||||
return true;
|
||||
/* not support compression */
|
||||
return false;
|
||||
}
|
||||
static inline struct page *f2fs_compress_control_page(struct page *page)
|
||||
{
|
||||
WARN_ON_ONCE(1);
|
||||
return ERR_PTR(-EINVAL);
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline void set_compress_context(struct inode *inode)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);

	F2FS_I(inode)->i_compress_algorithm =
			F2FS_OPTION(sbi).compress_algorithm;
	F2FS_I(inode)->i_log_cluster_size =
			F2FS_OPTION(sbi).compress_log_size;
	F2FS_I(inode)->i_cluster_size =
			1 << F2FS_I(inode)->i_log_cluster_size;
	F2FS_I(inode)->i_flags |= F2FS_COMPR_FL;
	set_inode_flag(inode, FI_COMPRESSED_FILE);
	stat_inc_compr_inode(inode);
}
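The cluster geometry set up here is a plain power-of-two relationship between the logarithmic mount-time setting and the per-inode cluster size. A quick sketch with example numbers (the concrete values are illustrative, not defaults quoted from this patch):

	/* Sketch: deriving cluster geometry from a log2 cluster size, as
	 * set_compress_context() does above. Example values only. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int log_cluster_size = 2;			/* example setting */
		unsigned int cluster_size = 1u << log_cluster_size;	/* 4 blocks per cluster */
		unsigned int block_size = 4096;				/* 4 KiB f2fs block, assumed */

		printf("cluster covers %u blocks = %u bytes\n",
		       cluster_size, cluster_size * block_size);	/* 4 blocks = 16384 bytes */
		return 0;
	}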
static inline u64 f2fs_disable_compressed_file(struct inode *inode)
{
	struct f2fs_inode_info *fi = F2FS_I(inode);

	if (!f2fs_compressed_file(inode))
		return 0;
	if (fi->i_compr_blocks)
		return fi->i_compr_blocks;

	fi->i_flags &= ~F2FS_COMPR_FL;
	clear_inode_flag(inode, FI_COMPRESSED_FILE);
	stat_dec_compr_inode(inode);
	return 0;
}
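Note the return convention: 0 when the file could be reverted to non-compressed, and the outstanding compressed block count when it could not, which lets callers branch on the result (the pin-file ioctl hunk later in this diff does exactly that and bails out with -EOPNOTSUPP). A small userspace sketch of the same calling convention, purely illustrative:

	/* Sketch: a helper that returns 0 on success and a non-zero "still in use"
	 * count otherwise, so the caller treats any non-zero result as failure. */
	#include <stdio.h>

	static unsigned long long disable_compression(unsigned long long compr_blocks)
	{
		if (compr_blocks)	/* cannot revert while compressed blocks remain */
			return compr_blocks;
		/* ...clear the compression state here... */
		return 0;
	}

	int main(void)
	{
		if (disable_compression(12))
			printf("refused: 12 compressed blocks still allocated\n");
		if (!disable_compression(0))
			printf("compression disabled\n");
		return 0;
	}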
#define F2FS_FEATURE_FUNCS(name, flagname) \
|
||||
|
@ -3626,6 +3847,7 @@ F2FS_FEATURE_FUNCS(lost_found, LOST_FOUND);
|
|||
F2FS_FEATURE_FUNCS(verity, VERITY);
|
||||
F2FS_FEATURE_FUNCS(sb_chksum, SB_CHKSUM);
|
||||
F2FS_FEATURE_FUNCS(casefold, CASEFOLD);
|
||||
F2FS_FEATURE_FUNCS(compression, COMPRESSION);
|
||||
|
||||
#ifdef CONFIG_BLK_DEV_ZONED
|
||||
static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
|
||||
|
@@ -3707,6 +3929,30 @@ static inline bool f2fs_may_encrypt(struct inode *inode)
#endif
}

static inline bool f2fs_may_compress(struct inode *inode)
{
	if (IS_SWAPFILE(inode) || f2fs_is_pinned_file(inode) ||
				f2fs_is_atomic_file(inode) ||
				f2fs_is_volatile_file(inode))
		return false;
	return S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode);
}

static inline void f2fs_i_compr_blocks_update(struct inode *inode,
						u64 blocks, bool add)
{
	int diff = F2FS_I(inode)->i_cluster_size - blocks;

	if (add) {
		F2FS_I(inode)->i_compr_blocks += diff;
		stat_add_compr_blocks(inode, diff);
	} else {
		F2FS_I(inode)->i_compr_blocks -= diff;
		stat_sub_compr_blocks(inode, diff);
	}
	f2fs_mark_inode_dirty_sync(inode, true);
}

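The bookkeeping above is a per-cluster delta, diff = i_cluster_size - blocks, applied in one direction when a cluster is written compressed and in the other when it is released. The sketch below just walks that arithmetic with example numbers (the interpretation of "blocks" as the count passed in by the caller is mine):

	/* Sketch: per-cluster accounting as in f2fs_i_compr_blocks_update() above.
	 * Example values only. */
	#include <stdio.h>

	int main(void)
	{
		int cluster_size = 4;	/* blocks per cluster */
		int blocks = 1;		/* block count passed in by the caller */
		int diff = cluster_size - blocks;

		long long i_compr_blocks = 0;
		i_compr_blocks += diff;	/* "add" path */
		printf("accounted: %lld\n", i_compr_blocks);	/* 3 */

		i_compr_blocks -= diff;	/* "remove" path, e.g. on truncation */
		printf("after release: %lld\n", i_compr_blocks);	/* 0 */
		return 0;
	}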
static inline int block_unaligned_IO(struct inode *inode,
|
||||
struct kiocb *iocb, struct iov_iter *iter)
|
||||
{
|
||||
|
@@ -3738,6 +3984,8 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
		return true;
	if (f2fs_is_multi_device(sbi))
		return true;
	if (f2fs_compressed_file(inode))
		return true;
	/*
	 * for blkzoned device, fallback direct IO to buffered IO, so
	 * all IOs can be serialized by log-structured write.

fs/f2fs/file.c (253 lines changed):

@@ -50,8 +50,9 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vmf->vma->vm_file);
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct dnode_of_data dn = { .node_changed = false };
	int err;
	struct dnode_of_data dn;
	bool need_alloc = true;
	int err = 0;

	if (unlikely(f2fs_cp_error(sbi))) {
		err = -EIO;
@@ -63,6 +64,26 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
		goto err;
	}

#ifdef CONFIG_F2FS_FS_COMPRESSION
	if (f2fs_compressed_file(inode)) {
		int ret = f2fs_is_compressed_cluster(inode, page->index);

		if (ret < 0) {
			err = ret;
			goto err;
		} else if (ret) {
			if (ret < F2FS_I(inode)->i_cluster_size) {
				err = -EAGAIN;
				goto err;
			}
			need_alloc = false;
		}
	}
#endif
	/* should do out of any locked page */
	if (need_alloc)
		f2fs_balance_fs(sbi, true);

	sb_start_pagefault(inode->i_sb);

	f2fs_bug_on(sbi, f2fs_has_inline_data(inode));
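The return value of f2fs_is_compressed_cluster() drives a three-way decision in the hunk above; it appears to report how many blocks of the surrounding cluster are valid, with zero meaning the page is not in a compressed cluster. A compact restatement as a standalone sketch (a stand-in, not the kernel function):

	/* Sketch of the page-fault decision above: ret is the reported number of
	 * valid blocks in the faulting page's cluster. Illustrative only. */
	#include <stdio.h>

	enum mkwrite_action { ALLOC_BLOCK, RETRY_LATER, SKIP_ALLOC, FAIL };

	static enum mkwrite_action classify(int ret, int cluster_size)
	{
		if (ret < 0)
			return FAIL;		/* propagate the error */
		if (ret == 0)
			return ALLOC_BLOCK;	/* not a compressed cluster: normal path */
		if (ret < cluster_size)
			return RETRY_LATER;	/* partially valid cluster: -EAGAIN */
		return SKIP_ALLOC;		/* fully valid cluster: no allocation needed */
	}

	int main(void)
	{
		printf("%d\n", classify(2, 4));	/* partially valid cluster -> RETRY_LATER */
		return 0;
	}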
@ -78,15 +99,17 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
|
|||
goto out_sem;
|
||||
}
|
||||
|
||||
/* block allocation */
|
||||
__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true);
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
err = f2fs_get_block(&dn, page->index);
|
||||
f2fs_put_dnode(&dn);
|
||||
__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false);
|
||||
if (err) {
|
||||
unlock_page(page);
|
||||
goto out_sem;
|
||||
if (need_alloc) {
|
||||
/* block allocation */
|
||||
__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true);
|
||||
set_new_dnode(&dn, inode, NULL, NULL, 0);
|
||||
err = f2fs_get_block(&dn, page->index);
|
||||
f2fs_put_dnode(&dn);
|
||||
__do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false);
|
||||
if (err) {
|
||||
unlock_page(page);
|
||||
goto out_sem;
|
||||
}
|
||||
}
|
||||
|
||||
/* fill the page */
|
||||
|
@ -120,8 +143,6 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
|
|||
out_sem:
|
||||
up_read(&F2FS_I(inode)->i_mmap_sem);
|
||||
|
||||
f2fs_balance_fs(sbi, dn.node_changed);
|
||||
|
||||
sb_end_pagefault(inode->i_sb);
|
||||
err:
|
||||
return block_page_mkwrite_return(err);
|
||||
|
@ -155,6 +176,8 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
|
|||
|
||||
if (!S_ISREG(inode->i_mode))
|
||||
cp_reason = CP_NON_REGULAR;
|
||||
else if (f2fs_compressed_file(inode))
|
||||
cp_reason = CP_COMPRESSED;
|
||||
else if (inode->i_nlink != 1)
|
||||
cp_reason = CP_HARDLINK;
|
||||
else if (is_sbi_flag_set(sbi, SBI_NEED_CP))
|
||||
|
@ -485,6 +508,9 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
|
|||
if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
|
||||
return -EIO;
|
||||
|
||||
if (!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
/* we don't need to use inline_data strictly */
|
||||
err = f2fs_convert_inline_inode(inode);
|
||||
if (err)
|
||||
|
@ -492,6 +518,7 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
|
|||
|
||||
file_accessed(file);
|
||||
vma->vm_ops = &f2fs_file_vm_ops;
|
||||
set_inode_flag(inode, FI_MMAP_FILE);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -502,6 +529,9 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
if (!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
err = fsverity_file_open(inode, filp);
|
||||
if (err)
|
||||
return err;
|
||||
|
@ -518,6 +548,9 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
|||
int nr_free = 0, ofs = dn->ofs_in_node, len = count;
|
||||
__le32 *addr;
|
||||
int base = 0;
|
||||
bool compressed_cluster = false;
|
||||
int cluster_index = 0, valid_blocks = 0;
|
||||
int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
|
||||
|
||||
if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode))
|
||||
base = get_extra_isize(dn->inode);
|
||||
|
@ -525,26 +558,43 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
|
|||
raw_node = F2FS_NODE(dn->node_page);
|
||||
addr = blkaddr_in_node(raw_node) + base + ofs;
|
||||
|
||||
for (; count > 0; count--, addr++, dn->ofs_in_node++) {
|
||||
/* Assumption: truncateion starts with cluster */
|
||||
for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
|
||||
block_t blkaddr = le32_to_cpu(*addr);
|
||||
|
||||
if (f2fs_compressed_file(dn->inode) &&
|
||||
!(cluster_index & (cluster_size - 1))) {
|
||||
if (compressed_cluster)
|
||||
f2fs_i_compr_blocks_update(dn->inode,
|
||||
valid_blocks, false);
|
||||
compressed_cluster = (blkaddr == COMPRESS_ADDR);
|
||||
valid_blocks = 0;
|
||||
}
|
||||
|
||||
if (blkaddr == NULL_ADDR)
|
||||
continue;
|
||||
|
||||
dn->data_blkaddr = NULL_ADDR;
|
||||
f2fs_set_data_blkaddr(dn);
|
||||
|
||||
if (__is_valid_data_blkaddr(blkaddr) &&
|
||||
!f2fs_is_valid_blkaddr(sbi, blkaddr,
|
||||
if (__is_valid_data_blkaddr(blkaddr)) {
|
||||
if (!f2fs_is_valid_blkaddr(sbi, blkaddr,
|
||||
DATA_GENERIC_ENHANCE))
|
||||
continue;
|
||||
continue;
|
||||
if (compressed_cluster)
|
||||
valid_blocks++;
|
||||
}
|
||||
|
||||
f2fs_invalidate_blocks(sbi, blkaddr);
|
||||
if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page))
|
||||
clear_inode_flag(dn->inode, FI_FIRST_BLOCK_WRITTEN);
|
||||
|
||||
f2fs_invalidate_blocks(sbi, blkaddr);
|
||||
nr_free++;
|
||||
}
|
||||
|
||||
if (compressed_cluster)
|
||||
f2fs_i_compr_blocks_update(dn->inode, valid_blocks, false);
|
||||
|
||||
if (nr_free) {
|
||||
pgoff_t fofs;
|
||||
/*
|
||||
|
@ -587,6 +637,9 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
|
|||
return 0;
|
||||
}
|
||||
|
||||
if (f2fs_compressed_file(inode))
|
||||
return 0;
|
||||
|
||||
page = f2fs_get_lock_data_page(inode, index, true);
|
||||
if (IS_ERR(page))
|
||||
return PTR_ERR(page) == -ENOENT ? 0 : PTR_ERR(page);
|
||||
|
@ -602,7 +655,7 @@ truncate_out:
|
|||
return 0;
|
||||
}
|
||||
|
||||
int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock)
|
||||
static int do_truncate_blocks(struct inode *inode, u64 from, bool lock)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
struct dnode_of_data dn;
|
||||
|
@@ -667,6 +720,28 @@ free_partial:
	return err;
}

int f2fs_truncate_blocks(struct inode *inode, u64 from, bool lock)
{
	u64 free_from = from;

	/*
	 * for compressed file, only support cluster size
	 * aligned truncation.
	 */
	if (f2fs_compressed_file(inode)) {
		size_t cluster_shift = PAGE_SHIFT +
					F2FS_I(inode)->i_log_cluster_size;
		size_t cluster_mask = (1 << cluster_shift) - 1;

		free_from = from >> cluster_shift;
		if (from & cluster_mask)
			free_from++;
		free_from <<= cluster_shift;
	}

	return do_truncate_blocks(inode, free_from, lock);
}

int f2fs_truncate(struct inode *inode)
{
	int err;
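The shift-and-mask dance above is a plain round-up of the truncation offset to the next cluster boundary, so blocks inside a partially truncated cluster are kept. A worked example (values are illustrative; PAGE_SHIFT = 12 is assumed):

	/* Sketch: round a truncation offset up to the containing cluster boundary,
	 * as f2fs_truncate_blocks() does above. Example values only. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long from = 10000;	/* requested truncation offset, bytes */
		unsigned int page_shift = 12;		/* 4 KiB pages, assumed */
		unsigned int log_cluster_size = 2;	/* 4 pages per cluster */
		unsigned int cluster_shift = page_shift + log_cluster_size;	/* 14 */
		unsigned long long cluster_mask = (1ULL << cluster_shift) - 1;	/* 16383 */

		unsigned long long free_from = from >> cluster_shift;	/* 0 */
		if (from & cluster_mask)
			free_from++;		/* offset is inside a cluster: round up */
		free_from <<= cluster_shift;

		printf("blocks freed starting at byte %llu\n", free_from);	/* 16384 */
		return 0;
	}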
@ -786,6 +861,10 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
|
|||
if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
|
||||
return -EIO;
|
||||
|
||||
if ((attr->ia_valid & ATTR_SIZE) &&
|
||||
!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
err = setattr_prepare(dentry, attr);
|
||||
if (err)
|
||||
return err;
|
||||
|
@ -1026,8 +1105,8 @@ next_dnode:
|
|||
} else if (ret == -ENOENT) {
|
||||
if (dn.max_level == 0)
|
||||
return -ENOENT;
|
||||
done = min((pgoff_t)ADDRS_PER_BLOCK(inode) - dn.ofs_in_node,
|
||||
len);
|
||||
done = min((pgoff_t)ADDRS_PER_BLOCK(inode) -
|
||||
dn.ofs_in_node, len);
|
||||
blkaddr += done;
|
||||
do_replace += done;
|
||||
goto next;
|
||||
|
@ -1190,13 +1269,13 @@ static int __exchange_data_block(struct inode *src_inode,
|
|||
|
||||
src_blkaddr = f2fs_kvzalloc(F2FS_I_SB(src_inode),
|
||||
array_size(olen, sizeof(block_t)),
|
||||
GFP_KERNEL);
|
||||
GFP_NOFS);
|
||||
if (!src_blkaddr)
|
||||
return -ENOMEM;
|
||||
|
||||
do_replace = f2fs_kvzalloc(F2FS_I_SB(src_inode),
|
||||
array_size(olen, sizeof(int)),
|
||||
GFP_KERNEL);
|
||||
GFP_NOFS);
|
||||
if (!do_replace) {
|
||||
kvfree(src_blkaddr);
|
||||
return -ENOMEM;
|
||||
|
@ -1563,7 +1642,7 @@ static int expand_inode_data(struct inode *inode, loff_t offset,
|
|||
next_alloc:
|
||||
if (has_not_enough_free_secs(sbi, 0,
|
||||
GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
err = f2fs_gc(sbi, true, false, NULL_SEGNO);
|
||||
if (err && err != -ENODATA && err != -EAGAIN)
|
||||
goto out_err;
|
||||
|
@ -1621,6 +1700,8 @@ static long f2fs_fallocate(struct file *file, int mode,
|
|||
return -EIO;
|
||||
if (!f2fs_is_checkpoint_ready(F2FS_I_SB(inode)))
|
||||
return -ENOSPC;
|
||||
if (!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
/* f2fs only support ->fallocate for regular file */
|
||||
if (!S_ISREG(inode->i_mode))
|
||||
|
@ -1630,6 +1711,11 @@ static long f2fs_fallocate(struct file *file, int mode,
|
|||
(mode & (FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_INSERT_RANGE)))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (f2fs_compressed_file(inode) &&
|
||||
(mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE |
|
||||
FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE)))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE |
|
||||
FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE |
|
||||
FALLOC_FL_INSERT_RANGE))
|
||||
|
@@ -1719,7 +1805,40 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
			return -ENOTEMPTY;
	}

	if (iflags & (F2FS_COMPR_FL | F2FS_NOCOMP_FL)) {
		if (!f2fs_sb_has_compression(F2FS_I_SB(inode)))
			return -EOPNOTSUPP;
		if ((iflags & F2FS_COMPR_FL) && (iflags & F2FS_NOCOMP_FL))
			return -EINVAL;
	}

	if ((iflags ^ fi->i_flags) & F2FS_COMPR_FL) {
		if (S_ISREG(inode->i_mode) &&
			(fi->i_flags & F2FS_COMPR_FL || i_size_read(inode) ||
						F2FS_HAS_BLOCKS(inode)))
			return -EINVAL;
		if (iflags & F2FS_NOCOMP_FL)
			return -EINVAL;
		if (iflags & F2FS_COMPR_FL) {
			int err = f2fs_convert_inline_inode(inode);

			if (err)
				return err;

			if (!f2fs_may_compress(inode))
				return -EINVAL;

			set_compress_context(inode);
		}
	}
	if ((iflags ^ fi->i_flags) & F2FS_NOCOMP_FL) {
		if (fi->i_flags & F2FS_COMPR_FL)
			return -EINVAL;
	}

	fi->i_flags = iflags | (fi->i_flags & ~mask);
	f2fs_bug_on(F2FS_I_SB(inode), (fi->i_flags & F2FS_COMPR_FL) &&
					(fi->i_flags & F2FS_NOCOMP_FL));

	if (fi->i_flags & F2FS_PROJINHERIT_FL)
		set_inode_flag(inode, FI_PROJ_INHERIT);
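The new checks boil down to two rules over the COMPR and NOCOMP bits: they may never be set together, and COMPR may only be toggled on an empty regular file. A compact restatement as a userspace sketch (the constants are local stand-ins; the simplified "file is empty" condition collapses the size/blocks/already-compressed checks of the hunk above):

	/* Sketch of the flag-validation rules added above. Illustrative only;
	 * error code choices follow the hunk. */
	#include <errno.h>
	#include <stdbool.h>

	#define EX_COMPR_FL	0x4
	#define EX_NOCOMP_FL	0x400

	static int validate_compress_flags(unsigned int new_flags, unsigned int old_flags,
					   bool fs_supports_compression, bool file_is_empty)
	{
		if (new_flags & (EX_COMPR_FL | EX_NOCOMP_FL)) {
			if (!fs_supports_compression)
				return -EOPNOTSUPP;
			if ((new_flags & EX_COMPR_FL) && (new_flags & EX_NOCOMP_FL))
				return -EINVAL;		/* mutually exclusive */
		}
		if ((new_flags ^ old_flags) & EX_COMPR_FL) {
			if (!file_is_empty)
				return -EINVAL;		/* only toggle COMPR on an empty file */
			if (new_flags & EX_NOCOMP_FL)
				return -EINVAL;
		}
		if (((new_flags ^ old_flags) & EX_NOCOMP_FL) && (old_flags & EX_COMPR_FL))
			return -EINVAL;			/* cannot flip NOCOMP on a compressed file */
		return 0;
	}

	int main(void)
	{
		/* turning COMPR on for a non-empty file must fail */
		return validate_compress_flags(EX_COMPR_FL, 0, true, false) == -EINVAL ? 0 : 1;
	}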
@ -1745,11 +1864,13 @@ static const struct {
|
|||
u32 iflag;
|
||||
u32 fsflag;
|
||||
} f2fs_fsflags_map[] = {
|
||||
{ F2FS_COMPR_FL, FS_COMPR_FL },
|
||||
{ F2FS_SYNC_FL, FS_SYNC_FL },
|
||||
{ F2FS_IMMUTABLE_FL, FS_IMMUTABLE_FL },
|
||||
{ F2FS_APPEND_FL, FS_APPEND_FL },
|
||||
{ F2FS_NODUMP_FL, FS_NODUMP_FL },
|
||||
{ F2FS_NOATIME_FL, FS_NOATIME_FL },
|
||||
{ F2FS_NOCOMP_FL, FS_NOCOMP_FL },
|
||||
{ F2FS_INDEX_FL, FS_INDEX_FL },
|
||||
{ F2FS_DIRSYNC_FL, FS_DIRSYNC_FL },
|
||||
{ F2FS_PROJINHERIT_FL, FS_PROJINHERIT_FL },
|
||||
|
@ -1757,11 +1878,13 @@ static const struct {
|
|||
};
|
||||
|
||||
#define F2FS_GETTABLE_FS_FL ( \
|
||||
FS_COMPR_FL | \
|
||||
FS_SYNC_FL | \
|
||||
FS_IMMUTABLE_FL | \
|
||||
FS_APPEND_FL | \
|
||||
FS_NODUMP_FL | \
|
||||
FS_NOATIME_FL | \
|
||||
FS_NOCOMP_FL | \
|
||||
FS_INDEX_FL | \
|
||||
FS_DIRSYNC_FL | \
|
||||
FS_PROJINHERIT_FL | \
|
||||
|
@ -1772,11 +1895,13 @@ static const struct {
|
|||
FS_CASEFOLD_FL)
|
||||
|
||||
#define F2FS_SETTABLE_FS_FL ( \
|
||||
FS_COMPR_FL | \
|
||||
FS_SYNC_FL | \
|
||||
FS_IMMUTABLE_FL | \
|
||||
FS_APPEND_FL | \
|
||||
FS_NODUMP_FL | \
|
||||
FS_NOATIME_FL | \
|
||||
FS_NOCOMP_FL | \
|
||||
FS_DIRSYNC_FL | \
|
||||
FS_PROJINHERIT_FL | \
|
||||
FS_CASEFOLD_FL)
|
||||
|
@ -1897,6 +2022,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
|
|||
|
||||
inode_lock(inode);
|
||||
|
||||
f2fs_disable_compressed_file(inode);
|
||||
|
||||
if (f2fs_is_atomic_file(inode)) {
|
||||
if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
|
||||
ret = -EINVAL;
|
||||
|
@ -1935,7 +2062,6 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
|
|||
|
||||
f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
|
||||
F2FS_I(inode)->inmem_task = current;
|
||||
stat_inc_atomic_write(inode);
|
||||
stat_update_max_atomic_write(inode);
|
||||
out:
|
||||
inode_unlock(inode);
|
||||
|
@ -2324,12 +2450,12 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
|
|||
return ret;
|
||||
|
||||
if (!sync) {
|
||||
if (!mutex_trylock(&sbi->gc_mutex)) {
|
||||
if (!down_write_trylock(&sbi->gc_lock)) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
} else {
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
}
|
||||
|
||||
ret = f2fs_gc(sbi, sync, true, NULL_SEGNO);
|
||||
|
@ -2367,12 +2493,12 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
|
|||
|
||||
do_more:
|
||||
if (!range.sync) {
|
||||
if (!mutex_trylock(&sbi->gc_mutex)) {
|
||||
if (!down_write_trylock(&sbi->gc_lock)) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
} else {
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
}
|
||||
|
||||
ret = f2fs_gc(sbi, range.sync, true, GET_SEGNO(sbi, range.start));
|
||||
|
@ -2803,7 +2929,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
|
|||
end_segno = min(start_segno + range.segments, dev_end_segno);
|
||||
|
||||
while (start_segno < end_segno) {
|
||||
if (!mutex_trylock(&sbi->gc_mutex)) {
|
||||
if (!down_write_trylock(&sbi->gc_lock)) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
|
@ -3098,10 +3224,16 @@ static int f2fs_ioc_set_pin_file(struct file *filp, unsigned long arg)
|
|||
ret = -EAGAIN;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = f2fs_convert_inline_inode(inode);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
if (f2fs_disable_compressed_file(inode)) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto out;
|
||||
}
|
||||
|
||||
set_inode_flag(inode, FI_PIN_FILE);
|
||||
ret = F2FS_I(inode)->i_gc_failures[GC_FAILURE_PIN];
|
||||
done:
|
||||
|
@ -3350,6 +3482,17 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
|
|||
}
|
||||
}
|
||||
|
||||
static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
|
||||
{
|
||||
struct file *file = iocb->ki_filp;
|
||||
struct inode *inode = file_inode(file);
|
||||
|
||||
if (!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
return generic_file_read_iter(iocb, iter);
|
||||
}
|
||||
|
||||
static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
|
||||
{
|
||||
struct file *file = iocb->ki_filp;
|
||||
|
@ -3361,6 +3504,9 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
|
|||
goto out;
|
||||
}
|
||||
|
||||
if (!f2fs_is_compress_backend_ready(inode))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (iocb->ki_flags & IOCB_NOWAIT) {
|
||||
if (!inode_trylock(inode)) {
|
||||
ret = -EAGAIN;
|
||||
|
@ -3389,18 +3535,41 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
|
|||
ret = -EAGAIN;
|
||||
goto out;
|
||||
}
|
||||
} else {
|
||||
preallocated = true;
|
||||
target_size = iocb->ki_pos + iov_iter_count(from);
|
||||
|
||||
err = f2fs_preallocate_blocks(iocb, from);
|
||||
if (err) {
|
||||
clear_inode_flag(inode, FI_NO_PREALLOC);
|
||||
inode_unlock(inode);
|
||||
ret = err;
|
||||
goto out;
|
||||
}
|
||||
goto write;
|
||||
}
|
||||
|
||||
if (is_inode_flag_set(inode, FI_NO_PREALLOC))
|
||||
goto write;
|
||||
|
||||
if (iocb->ki_flags & IOCB_DIRECT) {
|
||||
/*
|
||||
* Convert inline data for Direct I/O before entering
|
||||
* f2fs_direct_IO().
|
||||
*/
|
||||
err = f2fs_convert_inline_inode(inode);
|
||||
if (err)
|
||||
goto out_err;
|
||||
/*
|
||||
* If force_buffere_io() is true, we have to allocate
|
||||
* blocks all the time, since f2fs_direct_IO will fall
|
||||
* back to buffered IO.
|
||||
*/
|
||||
if (!f2fs_force_buffered_io(inode, iocb, from) &&
|
||||
allow_outplace_dio(inode, iocb, from))
|
||||
goto write;
|
||||
}
|
||||
preallocated = true;
|
||||
target_size = iocb->ki_pos + iov_iter_count(from);
|
||||
|
||||
err = f2fs_preallocate_blocks(iocb, from);
|
||||
if (err) {
|
||||
out_err:
|
||||
clear_inode_flag(inode, FI_NO_PREALLOC);
|
||||
inode_unlock(inode);
|
||||
ret = err;
|
||||
goto out;
|
||||
}
|
||||
write:
|
||||
ret = __generic_file_write_iter(iocb, from);
|
||||
clear_inode_flag(inode, FI_NO_PREALLOC);
|
||||
|
||||
|
@ -3475,7 +3644,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
|
|||
|
||||
const struct file_operations f2fs_file_operations = {
|
||||
.llseek = f2fs_llseek,
|
||||
.read_iter = generic_file_read_iter,
|
||||
.read_iter = f2fs_file_read_iter,
|
||||
.write_iter = f2fs_file_write_iter,
|
||||
.open = f2fs_file_open,
|
||||
.release = f2fs_release_file,
|
||||
|
|
fs/f2fs/gc.c (18 lines changed):
|
@@ -78,18 +78,18 @@ static int gc_thread_func(void *data)
		 */
		if (sbi->gc_mode == GC_URGENT) {
			wait_ms = gc_th->urgent_sleep_time;
			mutex_lock(&sbi->gc_mutex);
			down_write(&sbi->gc_lock);
			goto do_gc;
		}

		if (!mutex_trylock(&sbi->gc_mutex)) {
		if (!down_write_trylock(&sbi->gc_lock)) {
			stat_other_skip_bggc_count(sbi);
			goto next;
		}

		if (!is_idle(sbi, GC_TIME)) {
			increase_sleep_time(gc_th, &wait_ms);
			mutex_unlock(&sbi->gc_mutex);
			up_write(&sbi->gc_lock);
			stat_io_skip_bggc_count(sbi);
			goto next;
		}
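The gc_mutex to gc_lock conversion swaps a mutex for an rw_semaphore taken in write mode, so the "try to take the lock, otherwise skip this round" pattern survives unchanged. A userspace analogy with a POSIX rwlock (an analogy only, not kernel code):

	/* Analogy for the trylock pattern in gc_thread_func() above, expressed
	 * with a pthread rwlock. Illustrative only. */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t gc_lock = PTHREAD_RWLOCK_INITIALIZER;

	static void background_gc_round(void)
	{
		if (pthread_rwlock_trywrlock(&gc_lock) != 0) {
			printf("GC lock busy, skip background GC this round\n");
			return;
		}
		/* ...do one round of garbage collection... */
		pthread_rwlock_unlock(&gc_lock);
	}

	int main(void)
	{
		background_gc_round();
		return 0;
	}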
@ -99,7 +99,7 @@ static int gc_thread_func(void *data)
|
|||
else
|
||||
increase_sleep_time(gc_th, &wait_ms);
|
||||
do_gc:
|
||||
stat_inc_bggc_count(sbi);
|
||||
stat_inc_bggc_count(sbi->stat_info);
|
||||
|
||||
/* if return value is not zero, no victim was selected */
|
||||
if (f2fs_gc(sbi, test_opt(sbi, FORCE_FG_GC), true, NULL_SEGNO))
|
||||
|
@ -1049,8 +1049,10 @@ next_step:
|
|||
|
||||
if (phase == 3) {
|
||||
inode = f2fs_iget(sb, dni.ino);
|
||||
if (IS_ERR(inode) || is_bad_inode(inode))
|
||||
if (IS_ERR(inode) || is_bad_inode(inode)) {
|
||||
set_sbi_flag(sbi, SBI_NEED_FSCK);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!down_write_trylock(
|
||||
&F2FS_I(inode)->i_gc_rwsem[WRITE])) {
|
||||
|
@ -1368,7 +1370,7 @@ stop:
|
|||
reserved_segments(sbi),
|
||||
prefree_segments(sbi));
|
||||
|
||||
mutex_unlock(&sbi->gc_mutex);
|
||||
up_write(&sbi->gc_lock);
|
||||
|
||||
put_gc_inode(&gc_list);
|
||||
|
||||
|
@ -1407,9 +1409,9 @@ static int free_segment_range(struct f2fs_sb_info *sbi, unsigned int start,
|
|||
.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
|
||||
};
|
||||
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
do_garbage_collect(sbi, segno, &gc_list, FG_GC);
|
||||
mutex_unlock(&sbi->gc_mutex);
|
||||
up_write(&sbi->gc_lock);
|
||||
put_gc_inode(&gc_list);
|
||||
|
||||
if (get_valid_blocks(sbi, segno, true))
|
||||
|
|
|
@ -368,7 +368,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
|
|||
struct f2fs_dentry_ptr src, dst;
|
||||
int err;
|
||||
|
||||
page = f2fs_grab_cache_page(dir->i_mapping, 0, false);
|
||||
page = f2fs_grab_cache_page(dir->i_mapping, 0, true);
|
||||
if (!page) {
|
||||
f2fs_put_page(ipage, 1);
|
||||
return -ENOMEM;
|
||||
|
@ -530,7 +530,7 @@ recover:
|
|||
return err;
|
||||
}
|
||||
|
||||
static int f2fs_convert_inline_dir(struct inode *dir, struct page *ipage,
|
||||
static int do_convert_inline_dir(struct inode *dir, struct page *ipage,
|
||||
void *inline_dentry)
|
||||
{
|
||||
if (!F2FS_I(dir)->i_dir_level)
|
||||
|
@ -539,6 +539,44 @@ static int f2fs_convert_inline_dir(struct inode *dir, struct page *ipage,
|
|||
return f2fs_move_rehashed_dirents(dir, ipage, inline_dentry);
|
||||
}
|
||||
|
||||
int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry)
|
||||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
|
||||
struct page *ipage;
|
||||
struct fscrypt_name fname;
|
||||
void *inline_dentry = NULL;
|
||||
int err = 0;
|
||||
|
||||
if (!f2fs_has_inline_dentry(dir))
|
||||
return 0;
|
||||
|
||||
f2fs_lock_op(sbi);
|
||||
|
||||
err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &fname);
|
||||
if (err)
|
||||
goto out;
|
||||
|
||||
ipage = f2fs_get_node_page(sbi, dir->i_ino);
|
||||
if (IS_ERR(ipage)) {
|
||||
err = PTR_ERR(ipage);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (f2fs_has_enough_room(dir, ipage, &fname)) {
|
||||
f2fs_put_page(ipage, 1);
|
||||
goto out;
|
||||
}
|
||||
|
||||
inline_dentry = inline_data_addr(dir, ipage);
|
||||
|
||||
err = do_convert_inline_dir(dir, ipage, inline_dentry);
|
||||
if (!err)
|
||||
f2fs_put_page(ipage, 1);
|
||||
out:
|
||||
f2fs_unlock_op(sbi);
|
||||
return err;
|
||||
}
|
||||
|
||||
int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
|
||||
const struct qstr *orig_name,
|
||||
struct inode *inode, nid_t ino, umode_t mode)
|
||||
|
@ -562,7 +600,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
|
|||
|
||||
bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max);
|
||||
if (bit_pos >= d.max) {
|
||||
err = f2fs_convert_inline_dir(dir, ipage, inline_dentry);
|
||||
err = do_convert_inline_dir(dir, ipage, inline_dentry);
|
||||
if (err)
|
||||
return err;
|
||||
err = -EAGAIN;
|
||||
|
|
|
@ -200,6 +200,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
|
|||
{
|
||||
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
|
||||
struct f2fs_inode_info *fi = F2FS_I(inode);
|
||||
struct f2fs_inode *ri = F2FS_INODE(node_page);
|
||||
unsigned long long iblocks;
|
||||
|
||||
iblocks = le64_to_cpu(F2FS_INODE(node_page)->i_blocks);
|
||||
|
@ -286,6 +287,19 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
|
|||
return false;
|
||||
}
|
||||
|
||||
if (f2fs_has_extra_attr(inode) && f2fs_sb_has_compression(sbi) &&
|
||||
fi->i_flags & F2FS_COMPR_FL &&
|
||||
F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
|
||||
i_log_cluster_size)) {
|
||||
if (ri->i_compress_algorithm >= COMPRESS_MAX)
|
||||
return false;
|
||||
if (le64_to_cpu(ri->i_compr_blocks) > inode->i_blocks)
|
||||
return false;
|
||||
if (ri->i_log_cluster_size < MIN_COMPRESS_LOG_SIZE ||
|
||||
ri->i_log_cluster_size > MAX_COMPRESS_LOG_SIZE)
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
|
@ -407,6 +421,18 @@ static int do_read_inode(struct inode *inode)
|
|||
fi->i_crtime.tv_nsec = le32_to_cpu(ri->i_crtime_nsec);
|
||||
}
|
||||
|
||||
if (f2fs_has_extra_attr(inode) && f2fs_sb_has_compression(sbi) &&
|
||||
(fi->i_flags & F2FS_COMPR_FL)) {
|
||||
if (F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
|
||||
i_log_cluster_size)) {
|
||||
fi->i_compr_blocks = le64_to_cpu(ri->i_compr_blocks);
|
||||
fi->i_compress_algorithm = ri->i_compress_algorithm;
|
||||
fi->i_log_cluster_size = ri->i_log_cluster_size;
|
||||
fi->i_cluster_size = 1 << fi->i_log_cluster_size;
|
||||
set_inode_flag(inode, FI_COMPRESSED_FILE);
|
||||
}
|
||||
}
|
||||
|
||||
F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
|
||||
F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
|
||||
F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
|
||||
|
@ -416,6 +442,8 @@ static int do_read_inode(struct inode *inode)
|
|||
stat_inc_inline_xattr(inode);
|
||||
stat_inc_inline_inode(inode);
|
||||
stat_inc_inline_dir(inode);
|
||||
stat_inc_compr_inode(inode);
|
||||
stat_add_compr_blocks(inode, F2FS_I(inode)->i_compr_blocks);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -569,6 +597,17 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
|
|||
ri->i_crtime_nsec =
|
||||
cpu_to_le32(F2FS_I(inode)->i_crtime.tv_nsec);
|
||||
}
|
||||
|
||||
if (f2fs_sb_has_compression(F2FS_I_SB(inode)) &&
|
||||
F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize,
|
||||
i_log_cluster_size)) {
|
||||
ri->i_compr_blocks =
|
||||
cpu_to_le64(F2FS_I(inode)->i_compr_blocks);
|
||||
ri->i_compress_algorithm =
|
||||
F2FS_I(inode)->i_compress_algorithm;
|
||||
ri->i_log_cluster_size =
|
||||
F2FS_I(inode)->i_log_cluster_size;
|
||||
}
|
||||
}
|
||||
|
||||
__set_inode_rdev(inode, ri);
|
||||
|
@ -711,6 +750,8 @@ no_delete:
|
|||
stat_dec_inline_xattr(inode);
|
||||
stat_dec_inline_dir(inode);
|
||||
stat_dec_inline_inode(inode);
|
||||
stat_dec_compr_inode(inode);
|
||||
stat_sub_compr_blocks(inode, F2FS_I(inode)->i_compr_blocks);
|
||||
|
||||
if (likely(!f2fs_cp_error(sbi) &&
|
||||
!is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
|
||||
|
|
fs/f2fs/namei.c (120 lines changed):
|
@ -119,6 +119,13 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
|
|||
if (F2FS_I(inode)->i_flags & F2FS_PROJINHERIT_FL)
|
||||
set_inode_flag(inode, FI_PROJ_INHERIT);
|
||||
|
||||
if (f2fs_sb_has_compression(sbi)) {
|
||||
/* Inherit the compression flag in directory */
|
||||
if ((F2FS_I(dir)->i_flags & F2FS_COMPR_FL) &&
|
||||
f2fs_may_compress(inode))
|
||||
set_compress_context(inode);
|
||||
}
|
||||
|
||||
f2fs_set_inode_flags(inode);
|
||||
|
||||
trace_f2fs_new_inode(inode, 0);
|
||||
|
@ -149,6 +156,9 @@ static inline int is_extension_exist(const unsigned char *s, const char *sub)
|
|||
size_t sublen = strlen(sub);
|
||||
int i;
|
||||
|
||||
if (sublen == 1 && *sub == '*')
|
||||
return 1;
|
||||
|
||||
/*
|
||||
* filename format of multimedia file should be defined as:
|
||||
* "filename + '.' + extension + (optional: '.' + temp extension)".
|
||||
|
@ -262,6 +272,45 @@ int f2fs_update_extension_list(struct f2fs_sb_info *sbi, const char *name,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode,
|
||||
const unsigned char *name)
|
||||
{
|
||||
__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
|
||||
unsigned char (*ext)[F2FS_EXTENSION_LEN];
|
||||
unsigned int ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;
|
||||
int i, cold_count, hot_count;
|
||||
|
||||
if (!f2fs_sb_has_compression(sbi) ||
|
||||
is_inode_flag_set(inode, FI_COMPRESSED_FILE) ||
|
||||
F2FS_I(inode)->i_flags & F2FS_NOCOMP_FL ||
|
||||
!f2fs_may_compress(inode))
|
||||
return;
|
||||
|
||||
down_read(&sbi->sb_lock);
|
||||
|
||||
cold_count = le32_to_cpu(sbi->raw_super->extension_count);
|
||||
hot_count = sbi->raw_super->hot_ext_count;
|
||||
|
||||
for (i = cold_count; i < cold_count + hot_count; i++) {
|
||||
if (is_extension_exist(name, extlist[i])) {
|
||||
up_read(&sbi->sb_lock);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
up_read(&sbi->sb_lock);
|
||||
|
||||
ext = F2FS_OPTION(sbi).extensions;
|
||||
|
||||
for (i = 0; i < ext_cnt; i++) {
|
||||
if (!is_extension_exist(name, ext[i]))
|
||||
continue;
|
||||
|
||||
set_compress_context(inode);
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
|
||||
bool excl)
|
||||
{
|
||||
|
@ -286,6 +335,8 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
|
|||
if (!test_opt(sbi, DISABLE_EXT_IDENTIFY))
|
||||
set_file_temperature(sbi, inode, dentry->d_name.name);
|
||||
|
||||
set_compress_inode(sbi, inode, dentry->d_name.name);
|
||||
|
||||
inode->i_op = &f2fs_file_inode_operations;
|
||||
inode->i_fop = &f2fs_file_operations;
|
||||
inode->i_mapping->a_ops = &f2fs_dblock_aops;
|
||||
|
@ -797,6 +848,7 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
|
|||
|
||||
if (whiteout) {
|
||||
f2fs_i_links_write(inode, false);
|
||||
inode->i_state |= I_LINKABLE;
|
||||
*whiteout = inode;
|
||||
} else {
|
||||
d_tmpfile(dentry, inode);
|
||||
|
@ -849,12 +901,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
struct inode *old_inode = d_inode(old_dentry);
|
||||
struct inode *new_inode = d_inode(new_dentry);
|
||||
struct inode *whiteout = NULL;
|
||||
struct page *old_dir_page;
|
||||
struct page *old_dir_page = NULL;
|
||||
struct page *old_page, *new_page = NULL;
|
||||
struct f2fs_dir_entry *old_dir_entry = NULL;
|
||||
struct f2fs_dir_entry *old_entry;
|
||||
struct f2fs_dir_entry *new_entry;
|
||||
bool is_old_inline = f2fs_has_inline_dentry(old_dir);
|
||||
int err;
|
||||
|
||||
if (unlikely(f2fs_cp_error(sbi)))
|
||||
|
@ -867,6 +918,26 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
F2FS_I(old_dentry->d_inode)->i_projid)))
|
||||
return -EXDEV;
|
||||
|
||||
/*
|
||||
* If new_inode is null, the below renaming flow will
|
||||
* add a link in old_dir which can conver inline_dir.
|
||||
* After then, if we failed to get the entry due to other
|
||||
* reasons like ENOMEM, we had to remove the new entry.
|
||||
* Instead of adding such the error handling routine, let's
|
||||
* simply convert first here.
|
||||
*/
|
||||
if (old_dir == new_dir && !new_inode) {
|
||||
err = f2fs_try_convert_inline_dir(old_dir, new_dentry);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
if (flags & RENAME_WHITEOUT) {
|
||||
err = f2fs_create_whiteout(old_dir, &whiteout);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
err = dquot_initialize(old_dir);
|
||||
if (err)
|
||||
goto out;
|
||||
|
@ -898,17 +969,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
}
|
||||
}
|
||||
|
||||
if (flags & RENAME_WHITEOUT) {
|
||||
err = f2fs_create_whiteout(old_dir, &whiteout);
|
||||
if (err)
|
||||
goto out_dir;
|
||||
}
|
||||
|
||||
if (new_inode) {
|
||||
|
||||
err = -ENOTEMPTY;
|
||||
if (old_dir_entry && !f2fs_empty_dir(new_inode))
|
||||
goto out_whiteout;
|
||||
goto out_dir;
|
||||
|
||||
err = -ENOENT;
|
||||
new_entry = f2fs_find_entry(new_dir, &new_dentry->d_name,
|
||||
|
@ -916,7 +981,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
if (!new_entry) {
|
||||
if (IS_ERR(new_page))
|
||||
err = PTR_ERR(new_page);
|
||||
goto out_whiteout;
|
||||
goto out_dir;
|
||||
}
|
||||
|
||||
f2fs_balance_fs(sbi, true);
|
||||
|
@ -928,6 +993,7 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
goto put_out_dir;
|
||||
|
||||
f2fs_set_link(new_dir, new_entry, new_page, old_inode);
|
||||
new_page = NULL;
|
||||
|
||||
new_inode->i_ctime = current_time(new_inode);
|
||||
down_write(&F2FS_I(new_inode)->i_sem);
|
||||
|
@ -948,33 +1014,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
err = f2fs_add_link(new_dentry, old_inode);
|
||||
if (err) {
|
||||
f2fs_unlock_op(sbi);
|
||||
goto out_whiteout;
|
||||
goto out_dir;
|
||||
}
|
||||
|
||||
if (old_dir_entry)
|
||||
f2fs_i_links_write(new_dir, true);
|
||||
|
||||
/*
|
||||
* old entry and new entry can locate in the same inline
|
||||
* dentry in inode, when attaching new entry in inline dentry,
|
||||
* it could force inline dentry conversion, after that,
|
||||
* old_entry and old_page will point to wrong address, in
|
||||
* order to avoid this, let's do the check and update here.
|
||||
*/
|
||||
if (is_old_inline && !f2fs_has_inline_dentry(old_dir)) {
|
||||
f2fs_put_page(old_page, 0);
|
||||
old_page = NULL;
|
||||
|
||||
old_entry = f2fs_find_entry(old_dir,
|
||||
&old_dentry->d_name, &old_page);
|
||||
if (!old_entry) {
|
||||
err = -ENOENT;
|
||||
if (IS_ERR(old_page))
|
||||
err = PTR_ERR(old_page);
|
||||
f2fs_unlock_op(sbi);
|
||||
goto out_whiteout;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
down_write(&F2FS_I(old_inode)->i_sem);
|
||||
|
@ -989,9 +1033,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
f2fs_mark_inode_dirty_sync(old_inode, false);
|
||||
|
||||
f2fs_delete_entry(old_entry, old_page, old_dir, NULL);
|
||||
old_page = NULL;
|
||||
|
||||
if (whiteout) {
|
||||
whiteout->i_state |= I_LINKABLE;
|
||||
set_inode_flag(whiteout, FI_INC_LINK);
|
||||
err = f2fs_add_link(old_dentry, whiteout);
|
||||
if (err)
|
||||
|
@ -1025,17 +1069,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
|
|||
|
||||
put_out_dir:
|
||||
f2fs_unlock_op(sbi);
|
||||
if (new_page)
|
||||
f2fs_put_page(new_page, 0);
|
||||
out_whiteout:
|
||||
if (whiteout)
|
||||
iput(whiteout);
|
||||
f2fs_put_page(new_page, 0);
|
||||
out_dir:
|
||||
if (old_dir_entry)
|
||||
f2fs_put_page(old_dir_page, 0);
|
||||
out_old:
|
||||
f2fs_put_page(old_page, 0);
|
||||
out:
|
||||
if (whiteout)
|
||||
iput(whiteout);
|
||||
return err;
|
||||
}
|
||||
|
||||
|
|
|
@ -723,6 +723,7 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
|
|||
int ret = 0;
|
||||
unsigned long s_flags = sbi->sb->s_flags;
|
||||
bool need_writecp = false;
|
||||
bool fix_curseg_write_pointer = false;
|
||||
#ifdef CONFIG_QUOTA
|
||||
int quota_enabled;
|
||||
#endif
|
||||
|
@ -774,6 +775,8 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
|
|||
sbi->sb->s_flags = s_flags;
|
||||
}
|
||||
skip:
|
||||
fix_curseg_write_pointer = !check_only || list_empty(&inode_list);
|
||||
|
||||
destroy_fsync_dnodes(&inode_list, err);
|
||||
destroy_fsync_dnodes(&tmp_inode_list, err);
|
||||
|
||||
|
@ -784,9 +787,22 @@ skip:
|
|||
if (err) {
|
||||
truncate_inode_pages_final(NODE_MAPPING(sbi));
|
||||
truncate_inode_pages_final(META_MAPPING(sbi));
|
||||
} else {
|
||||
clear_sbi_flag(sbi, SBI_POR_DOING);
|
||||
}
|
||||
|
||||
/*
|
||||
* If fsync data succeeds or there is no fsync data to recover,
|
||||
* and the f2fs is not read only, check and fix zoned block devices'
|
||||
* write pointer consistency.
|
||||
*/
|
||||
if (!err && fix_curseg_write_pointer && !f2fs_readonly(sbi->sb) &&
|
||||
f2fs_sb_has_blkzoned(sbi)) {
|
||||
err = f2fs_fix_curseg_write_pointer(sbi);
|
||||
ret = err;
|
||||
}
|
||||
|
||||
if (!err)
|
||||
clear_sbi_flag(sbi, SBI_POR_DOING);
|
||||
|
||||
mutex_unlock(&sbi->cp_mutex);
|
||||
|
||||
/* let's drop all the directory inodes for clean checkpoint */
|
||||
|
|
|
@ -334,7 +334,6 @@ void f2fs_drop_inmem_pages(struct inode *inode)
|
|||
}
|
||||
|
||||
fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
|
||||
stat_dec_atomic_write(inode);
|
||||
|
||||
spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
|
||||
if (!list_empty(&fi->inmem_ilist))
|
||||
|
@ -505,7 +504,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
|
|||
* dir/node pages without enough free segments.
|
||||
*/
|
||||
if (has_not_enough_free_secs(sbi, 0, 0)) {
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
f2fs_gc(sbi, false, false, NULL_SEGNO);
|
||||
}
|
||||
}
|
||||
|
@ -2225,7 +2224,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
|
|||
struct sit_info *sit_i = SIT_I(sbi);
|
||||
|
||||
f2fs_bug_on(sbi, addr == NULL_ADDR);
|
||||
if (addr == NEW_ADDR)
|
||||
if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
|
||||
return;
|
||||
|
||||
invalidate_mapping_pages(META_MAPPING(sbi), addr, addr);
|
||||
|
@ -2861,9 +2860,9 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
|
|||
if (sbi->discard_blks == 0)
|
||||
goto out;
|
||||
|
||||
mutex_lock(&sbi->gc_mutex);
|
||||
down_write(&sbi->gc_lock);
|
||||
err = f2fs_write_checkpoint(sbi, &cpc);
|
||||
mutex_unlock(&sbi->gc_mutex);
|
||||
up_write(&sbi->gc_lock);
|
||||
if (err)
|
||||
goto out;
|
||||
|
||||
|
@ -3036,7 +3035,8 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
|
|||
if (fio->type == DATA) {
|
||||
struct inode *inode = fio->page->mapping->host;
|
||||
|
||||
if (is_cold_data(fio->page) || file_is_cold(inode))
|
||||
if (is_cold_data(fio->page) || file_is_cold(inode) ||
|
||||
f2fs_compressed_file(inode))
|
||||
return CURSEG_COLD_DATA;
|
||||
if (file_is_hot(inode) ||
|
||||
is_inode_flag_set(inode, FI_HOT_DATA) ||
|
||||
|
@ -3289,7 +3289,7 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
|
|||
|
||||
stat_inc_inplace_blocks(fio->sbi);
|
||||
|
||||
if (fio->bio)
|
||||
if (fio->bio && !(SM_I(sbi)->ipu_policy & (1 << F2FS_IPU_NOCACHE)))
|
||||
err = f2fs_merge_page_bio(fio);
|
||||
else
|
||||
err = f2fs_submit_page_bio(fio);
|
||||
|
@ -4368,6 +4368,263 @@ out:
|
|||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_BLK_DEV_ZONED
|
||||
|
||||
static int check_zone_write_pointer(struct f2fs_sb_info *sbi,
|
||||
struct f2fs_dev_info *fdev,
|
||||
struct blk_zone *zone)
|
||||
{
|
||||
unsigned int wp_segno, wp_blkoff, zone_secno, zone_segno, segno;
|
||||
block_t zone_block, wp_block, last_valid_block;
|
||||
unsigned int log_sectors_per_block = sbi->log_blocksize - SECTOR_SHIFT;
|
||||
int i, s, b, ret;
|
||||
struct seg_entry *se;
|
||||
|
||||
if (zone->type != BLK_ZONE_TYPE_SEQWRITE_REQ)
|
||||
return 0;
|
||||
|
||||
wp_block = fdev->start_blk + (zone->wp >> log_sectors_per_block);
|
||||
wp_segno = GET_SEGNO(sbi, wp_block);
|
||||
wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
|
||||
zone_block = fdev->start_blk + (zone->start >> log_sectors_per_block);
|
||||
zone_segno = GET_SEGNO(sbi, zone_block);
|
||||
zone_secno = GET_SEC_FROM_SEG(sbi, zone_segno);
|
||||
|
||||
if (zone_segno >= MAIN_SEGS(sbi))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Skip check of zones cursegs point to, since
|
||||
* fix_curseg_write_pointer() checks them.
|
||||
*/
|
||||
for (i = 0; i < NO_CHECK_TYPE; i++)
|
||||
if (zone_secno == GET_SEC_FROM_SEG(sbi,
|
||||
CURSEG_I(sbi, i)->segno))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Get last valid block of the zone.
|
||||
*/
|
||||
last_valid_block = zone_block - 1;
|
||||
for (s = sbi->segs_per_sec - 1; s >= 0; s--) {
|
||||
segno = zone_segno + s;
|
||||
se = get_seg_entry(sbi, segno);
|
||||
for (b = sbi->blocks_per_seg - 1; b >= 0; b--)
|
||||
if (f2fs_test_bit(b, se->cur_valid_map)) {
|
||||
last_valid_block = START_BLOCK(sbi, segno) + b;
|
||||
break;
|
||||
}
|
||||
if (last_valid_block >= zone_block)
|
||||
break;
|
||||
}
|
||||
|
||||
/*
|
||||
* If last valid block is beyond the write pointer, report the
|
||||
* inconsistency. This inconsistency does not cause write error
|
||||
* because the zone will not be selected for write operation until
|
||||
* it get discarded. Just report it.
|
||||
*/
|
||||
if (last_valid_block >= wp_block) {
|
||||
f2fs_notice(sbi, "Valid block beyond write pointer: "
|
||||
"valid block[0x%x,0x%x] wp[0x%x,0x%x]",
|
||||
GET_SEGNO(sbi, last_valid_block),
|
||||
GET_BLKOFF_FROM_SEG0(sbi, last_valid_block),
|
||||
wp_segno, wp_blkoff);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* If there is no valid block in the zone and if write pointer is
|
||||
* not at zone start, reset the write pointer.
|
||||
*/
|
||||
if (last_valid_block + 1 == zone_block && zone->wp != zone->start) {
|
||||
f2fs_notice(sbi,
|
||||
"Zone without valid block has non-zero write "
|
||||
"pointer. Reset the write pointer: wp[0x%x,0x%x]",
|
||||
wp_segno, wp_blkoff);
|
||||
ret = __f2fs_issue_discard_zone(sbi, fdev->bdev, zone_block,
|
||||
zone->len >> log_sectors_per_block);
|
||||
if (ret) {
|
||||
f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
|
||||
fdev->path, ret);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
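The write-pointer comparisons in check_zone_write_pointer() all happen in filesystem blocks, so the zone-report values, which are in 512-byte sectors, are shifted down by log_sectors_per_block first. A worked conversion with example geometry (SECTOR_SHIFT = 9 assumed):

	/* Sketch: converting a zone's write pointer from sectors to f2fs blocks,
	 * as the function above does. Example geometry only. */
	#include <stdio.h>

	int main(void)
	{
		unsigned int log_blocksize = 12;	/* 4 KiB blocks */
		unsigned int sector_shift = 9;		/* 512-byte sectors */
		unsigned int log_sectors_per_block = log_blocksize - sector_shift;	/* 3 */

		unsigned long long dev_start_blk = 0x1000;	/* device offset in blocks */
		unsigned long long zone_wp_sector = 0x20008;	/* write pointer from the zone report */

		unsigned long long wp_block =
			dev_start_blk + (zone_wp_sector >> log_sectors_per_block);
		printf("write pointer at block 0x%llx\n", wp_block);	/* 0x5001 */
		return 0;
	}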
|
||||
|
||||
static struct f2fs_dev_info *get_target_zoned_dev(struct f2fs_sb_info *sbi,
|
||||
block_t zone_blkaddr)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < sbi->s_ndevs; i++) {
|
||||
if (!bdev_is_zoned(FDEV(i).bdev))
|
||||
continue;
|
||||
if (sbi->s_ndevs == 1 || (FDEV(i).start_blk <= zone_blkaddr &&
|
||||
zone_blkaddr <= FDEV(i).end_blk))
|
||||
return &FDEV(i);
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int report_one_zone_cb(struct blk_zone *zone, unsigned int idx,
|
||||
void *data) {
|
||||
memcpy(data, zone, sizeof(struct blk_zone));
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
|
||||
{
|
||||
struct curseg_info *cs = CURSEG_I(sbi, type);
|
||||
struct f2fs_dev_info *zbd;
|
||||
struct blk_zone zone;
|
||||
unsigned int cs_section, wp_segno, wp_blkoff, wp_sector_off;
|
||||
block_t cs_zone_block, wp_block;
|
||||
unsigned int log_sectors_per_block = sbi->log_blocksize - SECTOR_SHIFT;
|
||||
sector_t zone_sector;
|
||||
int err;
|
||||
|
||||
cs_section = GET_SEC_FROM_SEG(sbi, cs->segno);
|
||||
cs_zone_block = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, cs_section));
|
||||
|
||||
zbd = get_target_zoned_dev(sbi, cs_zone_block);
|
||||
if (!zbd)
|
||||
return 0;
|
||||
|
||||
/* report zone for the sector the curseg points to */
|
||||
zone_sector = (sector_t)(cs_zone_block - zbd->start_blk)
|
||||
<< log_sectors_per_block;
|
||||
err = blkdev_report_zones(zbd->bdev, zone_sector, 1,
|
||||
report_one_zone_cb, &zone);
|
||||
if (err != 1) {
|
||||
f2fs_err(sbi, "Report zone failed: %s errno=(%d)",
|
||||
zbd->path, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
if (zone.type != BLK_ZONE_TYPE_SEQWRITE_REQ)
|
||||
return 0;
|
||||
|
||||
wp_block = zbd->start_blk + (zone.wp >> log_sectors_per_block);
|
||||
wp_segno = GET_SEGNO(sbi, wp_block);
|
||||
wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
|
||||
wp_sector_off = zone.wp & GENMASK(log_sectors_per_block - 1, 0);
|
||||
|
||||
if (cs->segno == wp_segno && cs->next_blkoff == wp_blkoff &&
|
||||
wp_sector_off == 0)
|
||||
return 0;
|
||||
|
||||
f2fs_notice(sbi, "Unaligned curseg[%d] with write pointer: "
|
||||
"curseg[0x%x,0x%x] wp[0x%x,0x%x]",
|
||||
type, cs->segno, cs->next_blkoff, wp_segno, wp_blkoff);
|
||||
|
||||
f2fs_notice(sbi, "Assign new section to curseg[%d]: "
|
||||
"curseg[0x%x,0x%x]", type, cs->segno, cs->next_blkoff);
|
||||
allocate_segment_by_default(sbi, type, true);
|
||||
|
||||
/* check consistency of the zone curseg pointed to */
|
||||
if (check_zone_write_pointer(sbi, zbd, &zone))
|
||||
return -EIO;
|
||||
|
||||
/* check newly assigned zone */
|
||||
cs_section = GET_SEC_FROM_SEG(sbi, cs->segno);
|
||||
cs_zone_block = START_BLOCK(sbi, GET_SEG_FROM_SEC(sbi, cs_section));
|
||||
|
||||
zbd = get_target_zoned_dev(sbi, cs_zone_block);
|
||||
if (!zbd)
|
||||
return 0;
|
||||
|
||||
zone_sector = (sector_t)(cs_zone_block - zbd->start_blk)
|
||||
<< log_sectors_per_block;
|
||||
err = blkdev_report_zones(zbd->bdev, zone_sector, 1,
|
||||
report_one_zone_cb, &zone);
|
||||
if (err != 1) {
|
||||
f2fs_err(sbi, "Report zone failed: %s errno=(%d)",
|
||||
zbd->path, err);
|
||||
return err;
|
||||
}
|
||||
|
||||
if (zone.type != BLK_ZONE_TYPE_SEQWRITE_REQ)
|
||||
return 0;
|
||||
|
||||
if (zone.wp != zone.start) {
|
||||
f2fs_notice(sbi,
|
||||
"New zone for curseg[%d] is not yet discarded. "
|
||||
"Reset the zone: curseg[0x%x,0x%x]",
|
||||
type, cs->segno, cs->next_blkoff);
|
||||
err = __f2fs_issue_discard_zone(sbi, zbd->bdev,
|
||||
zone_sector >> log_sectors_per_block,
|
||||
zone.len >> log_sectors_per_block);
|
||||
if (err) {
|
||||
f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
|
||||
zbd->path, err);
|
||||
return err;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int f2fs_fix_curseg_write_pointer(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
int i, ret;
|
||||
|
||||
for (i = 0; i < NO_CHECK_TYPE; i++) {
|
||||
ret = fix_curseg_write_pointer(sbi, i);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct check_zone_write_pointer_args {
|
||||
struct f2fs_sb_info *sbi;
|
||||
struct f2fs_dev_info *fdev;
|
||||
};
|
||||
|
||||
static int check_zone_write_pointer_cb(struct blk_zone *zone, unsigned int idx,
|
||||
void *data) {
|
||||
struct check_zone_write_pointer_args *args;
|
||||
args = (struct check_zone_write_pointer_args *)data;
|
||||
|
||||
return check_zone_write_pointer(args->sbi, args->fdev, zone);
|
||||
}
|
||||
|
||||
int f2fs_check_write_pointer(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
int i, ret;
|
||||
struct check_zone_write_pointer_args args;
|
||||
|
||||
for (i = 0; i < sbi->s_ndevs; i++) {
|
||||
if (!bdev_is_zoned(FDEV(i).bdev))
|
||||
continue;
|
||||
|
||||
args.sbi = sbi;
|
||||
args.fdev = &FDEV(i);
|
||||
ret = blkdev_report_zones(FDEV(i).bdev, 0, BLK_ALL_ZONES,
|
||||
check_zone_write_pointer_cb, &args);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
#else
|
||||
int f2fs_fix_curseg_write_pointer(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
int f2fs_check_write_pointer(struct f2fs_sb_info *sbi)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Update min, max modified time for cost-benefit GC algorithm
|
||||
*/
|
||||
|
|
|
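The zone check above boils down to two rules: valid data must sit strictly below the zone's write pointer, and an empty zone whose write pointer has drifted away from the zone start is discarded so sequential writes can restart at offset zero. A minimal stand-alone sketch of those two rules, with made-up names that are not part of the kernel patch:

```c
/* Hypothetical, simplified illustration -- not taken from the patch above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Valid data must not live at or beyond the zone's write pointer. */
static bool zone_consistent(uint64_t last_valid_block, uint64_t wp_block)
{
	return last_valid_block < wp_block;
}

/*
 * A zone with no valid block is "empty"; the kernel code encodes that as
 * last_valid_block == zone_start_block - 1.  An empty zone whose write
 * pointer is not at the zone start gets reset (discarded).
 */
static bool zone_needs_reset(uint64_t zone_start_block,
			     uint64_t last_valid_block, uint64_t wp_block)
{
	bool empty = (last_valid_block + 1 == zone_start_block);

	return empty && wp_block != zone_start_block;
}

int main(void)
{
	/* Example: empty zone starting at block 0x8000, write pointer at 0x8010. */
	printf("consistent: %d\n", zone_consistent(0x7fff, 0x8010));
	printf("needs reset: %d\n", zone_needs_reset(0x8000, 0x7fff, 0x8010));
	return 0;
}
```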
@@ -200,18 +200,6 @@ struct segment_allocation {
	void (*allocate_segment)(struct f2fs_sb_info *, int, bool);
};

/*
 * this value is set in page as a private data which indicate that
 * the page is atomically written, and it is in inmem_pages list.
 */
#define ATOMIC_WRITTEN_PAGE	((unsigned long)-1)
#define DUMMY_WRITTEN_PAGE	((unsigned long)-2)

#define IS_ATOMIC_WRITTEN_PAGE(page)	\
	(page_private(page) == (unsigned long)ATOMIC_WRITTEN_PAGE)
#define IS_DUMMY_WRITTEN_PAGE(page)	\
	(page_private(page) == (unsigned long)DUMMY_WRITTEN_PAGE)

#define MAX_SKIP_GC_COUNT	16

struct inmem_pages {

@@ -619,8 +607,10 @@ static inline int utilization(struct f2fs_sb_info *sbi)
 *                     threashold,
 * F2FS_IPU_FSYNC - activated in fsync path only for high performance flash
 *                     storages. IPU will be triggered only if the # of dirty
 *                     pages over min_fsync_blocks.
 * F2FS_IPUT_DISABLE - disable IPU. (=default option)
 *                     pages over min_fsync_blocks. (=default option)
 * F2FS_IPU_ASYNC - do IPU given by asynchronous write requests.
 * F2FS_IPU_NOCACHE - disable IPU bio cache.
 * F2FS_IPUT_DISABLE - disable IPU. (=default option in LFS mode)
 */
#define DEF_MIN_IPU_UTIL	70
#define DEF_MIN_FSYNC_BLOCKS	8

@@ -635,6 +625,7 @@ enum {
	F2FS_IPU_SSR_UTIL,
	F2FS_IPU_FSYNC,
	F2FS_IPU_ASYNC,
	F2FS_IPU_NOCACHE,
};

static inline unsigned int curseg_segno(struct f2fs_sb_info *sbi,
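For orientation, elsewhere in f2fs the ipu_policy knob consumes these enum values as bit positions, so the new F2FS_IPU_NOCACHE mode can be combined with an existing policy. A hedged sketch (values assumed for illustration, not taken from this diff):

```c
/* Hypothetical illustration of combining IPU policy bits. */
#include <stdio.h>

enum {
	F2FS_IPU_FORCE,
	F2FS_IPU_SSR,
	F2FS_IPU_UTIL,
	F2FS_IPU_SSR_UTIL,
	F2FS_IPU_FSYNC,
	F2FS_IPU_ASYNC,
	F2FS_IPU_NOCACHE,
};

int main(void)
{
	/* FSYNC-only in-place updates, with the new bio-cache bypass on top. */
	unsigned int policy = (1u << F2FS_IPU_FSYNC) | (1u << F2FS_IPU_NOCACHE);

	printf("ipu_policy bitmask: 0x%x\n", policy);                 /* 0x50 */
	printf("nocache enabled: %d\n", !!(policy & (1u << F2FS_IPU_NOCACHE)));
	return 0;
}
```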
fs/f2fs/super.c (182 changed lines)

@@ -141,6 +141,9 @@ enum {
	Opt_checkpoint_disable_cap,
	Opt_checkpoint_disable_cap_perc,
	Opt_checkpoint_enable,
	Opt_compress_algorithm,
	Opt_compress_log_size,
	Opt_compress_extension,
	Opt_err,
};

@@ -203,6 +206,9 @@ static match_table_t f2fs_tokens = {
	{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
	{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},
	{Opt_checkpoint_enable, "checkpoint=enable"},
	{Opt_compress_algorithm, "compress_algorithm=%s"},
	{Opt_compress_log_size, "compress_log_size=%u"},
	{Opt_compress_extension, "compress_extension=%s"},
	{Opt_err, NULL},
};

@@ -391,8 +397,9 @@ static int parse_options(struct super_block *sb, char *options)
{
	struct f2fs_sb_info *sbi = F2FS_SB(sb);
	substring_t args[MAX_OPT_ARGS];
	unsigned char (*ext)[F2FS_EXTENSION_LEN];
	char *p, *name;
	int arg = 0;
	int arg = 0, ext_cnt;
	kuid_t uid;
	kgid_t gid;
#ifdef CONFIG_QUOTA

@@ -810,6 +817,66 @@ static int parse_options(struct super_block *sb, char *options)
		case Opt_checkpoint_enable:
			clear_opt(sbi, DISABLE_CHECKPOINT);
			break;
		case Opt_compress_algorithm:
			if (!f2fs_sb_has_compression(sbi)) {
				f2fs_err(sbi, "Compression feature is off");
				return -EINVAL;
			}
			name = match_strdup(&args[0]);
			if (!name)
				return -ENOMEM;
			if (strlen(name) == 3 && !strcmp(name, "lzo")) {
				F2FS_OPTION(sbi).compress_algorithm =
							COMPRESS_LZO;
			} else if (strlen(name) == 3 &&
					!strcmp(name, "lz4")) {
				F2FS_OPTION(sbi).compress_algorithm =
							COMPRESS_LZ4;
			} else {
				kfree(name);
				return -EINVAL;
			}
			kfree(name);
			break;
		case Opt_compress_log_size:
			if (!f2fs_sb_has_compression(sbi)) {
				f2fs_err(sbi, "Compression feature is off");
				return -EINVAL;
			}
			if (args->from && match_int(args, &arg))
				return -EINVAL;
			if (arg < MIN_COMPRESS_LOG_SIZE ||
				arg > MAX_COMPRESS_LOG_SIZE) {
				f2fs_err(sbi,
					"Compress cluster log size is out of range");
				return -EINVAL;
			}
			F2FS_OPTION(sbi).compress_log_size = arg;
			break;
		case Opt_compress_extension:
			if (!f2fs_sb_has_compression(sbi)) {
				f2fs_err(sbi, "Compression feature is off");
				return -EINVAL;
			}
			name = match_strdup(&args[0]);
			if (!name)
				return -ENOMEM;

			ext = F2FS_OPTION(sbi).extensions;
			ext_cnt = F2FS_OPTION(sbi).compress_ext_cnt;

			if (strlen(name) >= F2FS_EXTENSION_LEN ||
				ext_cnt >= COMPRESS_EXT_NUM) {
				f2fs_err(sbi,
					"invalid extension length/number");
				kfree(name);
				return -EINVAL;
			}

			strcpy(ext[ext_cnt], name);
			F2FS_OPTION(sbi).compress_ext_cnt++;
			kfree(name);
			break;
		default:
			f2fs_err(sbi, "Unrecognized mount option \"%s\" or missing value",
				p);

@@ -1125,6 +1192,8 @@ static void f2fs_put_super(struct super_block *sb)
	f2fs_destroy_node_manager(sbi);
	f2fs_destroy_segment_manager(sbi);

	f2fs_destroy_post_read_wq(sbi);

	kvfree(sbi->ckpt);

	f2fs_unregister_sysfs(sbi);

@@ -1169,9 +1238,9 @@ int f2fs_sync_fs(struct super_block *sb, int sync)

		cpc.reason = __get_cp_reason(sbi);

		mutex_lock(&sbi->gc_mutex);
		down_write(&sbi->gc_lock);
		err = f2fs_write_checkpoint(sbi, &cpc);
		mutex_unlock(&sbi->gc_mutex);
		up_write(&sbi->gc_lock);
	}
	f2fs_trace_ios(NULL, 1);

@@ -1213,12 +1282,10 @@ static int f2fs_statfs_project(struct super_block *sb,
		return PTR_ERR(dquot);
	spin_lock(&dquot->dq_dqb_lock);

	limit = 0;
	if (dquot->dq_dqb.dqb_bsoftlimit)
		limit = dquot->dq_dqb.dqb_bsoftlimit;
	if (dquot->dq_dqb.dqb_bhardlimit &&
		(!limit || dquot->dq_dqb.dqb_bhardlimit < limit))
		limit = dquot->dq_dqb.dqb_bhardlimit;
	limit = min_not_zero(dquot->dq_dqb.dqb_bsoftlimit,
					dquot->dq_dqb.dqb_bhardlimit);
	if (limit)
		limit >>= sb->s_blocksize_bits;

	if (limit && buf->f_blocks > limit) {
		curblock = dquot->dq_dqb.dqb_curspace >> sb->s_blocksize_bits;

@@ -1228,12 +1295,8 @@ static int f2fs_statfs_project(struct super_block *sb,
			(buf->f_blocks - curblock) : 0;
	}

	limit = 0;
	if (dquot->dq_dqb.dqb_isoftlimit)
		limit = dquot->dq_dqb.dqb_isoftlimit;
	if (dquot->dq_dqb.dqb_ihardlimit &&
		(!limit || dquot->dq_dqb.dqb_ihardlimit < limit))
		limit = dquot->dq_dqb.dqb_ihardlimit;
	limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
					dquot->dq_dqb.dqb_ihardlimit);

	if (limit && buf->f_files > limit) {
		buf->f_files = limit;

@@ -1340,6 +1403,35 @@ static inline void f2fs_show_quota_options(struct seq_file *seq,
#endif
}

static inline void f2fs_show_compress_options(struct seq_file *seq,
						struct super_block *sb)
{
	struct f2fs_sb_info *sbi = F2FS_SB(sb);
	char *algtype = "";
	int i;

	if (!f2fs_sb_has_compression(sbi))
		return;

	switch (F2FS_OPTION(sbi).compress_algorithm) {
	case COMPRESS_LZO:
		algtype = "lzo";
		break;
	case COMPRESS_LZ4:
		algtype = "lz4";
		break;
	}
	seq_printf(seq, ",compress_algorithm=%s", algtype);

	seq_printf(seq, ",compress_log_size=%u",
			F2FS_OPTION(sbi).compress_log_size);

	for (i = 0; i < F2FS_OPTION(sbi).compress_ext_cnt; i++) {
		seq_printf(seq, ",compress_extension=%s",
			F2FS_OPTION(sbi).extensions[i]);
	}
}

static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
{
	struct f2fs_sb_info *sbi = F2FS_SB(root->d_sb);
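To visualize what the new show-options hook appends, here is a small user-space approximation; the option values are assumptions picked for illustration, not read from a real f2fs super block (the real function writes to the mount's seq_file rather than stdout):

```c
/* Hypothetical mock-up of the ",compress_*" fragment emitted in /proc/mounts. */
#include <stdio.h>

int main(void)
{
	const char *algtype = "lz4";            /* COMPRESS_LZ4 (assumed) */
	unsigned int log_size = 2;              /* compress_log_size (assumed) */
	const char *exts[] = { "txt", "log" };  /* compress_extension list (assumed) */

	printf(",compress_algorithm=%s", algtype);
	printf(",compress_log_size=%u", log_size);
	for (size_t i = 0; i < sizeof(exts) / sizeof(exts[0]); i++)
		printf(",compress_extension=%s", exts[i]);
	putchar('\n');
	return 0;
}
```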
@@ -1462,6 +1554,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
		seq_printf(seq, ",fsync_mode=%s", "strict");
	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_NOBARRIER)
		seq_printf(seq, ",fsync_mode=%s", "nobarrier");

	f2fs_show_compress_options(seq, sbi->sb);
	return 0;
}

@@ -1476,6 +1570,9 @@ static void default_options(struct f2fs_sb_info *sbi)
	F2FS_OPTION(sbi).test_dummy_encryption = false;
	F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
	F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZO;
	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
	F2FS_OPTION(sbi).compress_ext_cnt = 0;

	set_opt(sbi, BG_GC);
	set_opt(sbi, INLINE_XATTR);

@@ -1524,7 +1621,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
	f2fs_update_time(sbi, DISABLE_TIME);

	while (!f2fs_time_over(sbi, DISABLE_TIME)) {
		mutex_lock(&sbi->gc_mutex);
		down_write(&sbi->gc_lock);
		err = f2fs_gc(sbi, true, false, NULL_SEGNO);
		if (err == -ENODATA) {
			err = 0;

@@ -1546,7 +1643,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
		goto restore_flag;
	}

	mutex_lock(&sbi->gc_mutex);
	down_write(&sbi->gc_lock);
	cpc.reason = CP_PAUSE;
	set_sbi_flag(sbi, SBI_CP_DISABLED);
	err = f2fs_write_checkpoint(sbi, &cpc);

@@ -1558,7 +1655,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
	spin_unlock(&sbi->stat_lock);

out_unlock:
	mutex_unlock(&sbi->gc_mutex);
	up_write(&sbi->gc_lock);
restore_flag:
	sbi->sb->s_flags = s_flags;	/* Restore MS_RDONLY status */
	return err;

@@ -1566,12 +1663,12 @@ restore_flag:

static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
{
	mutex_lock(&sbi->gc_mutex);
	down_write(&sbi->gc_lock);
	f2fs_dirty_to_prefree(sbi);

	clear_sbi_flag(sbi, SBI_CP_DISABLED);
	set_sbi_flag(sbi, SBI_IS_DIRTY);
	mutex_unlock(&sbi->gc_mutex);
	up_write(&sbi->gc_lock);

	f2fs_sync_fs(sbi->sb, 1);
}

@@ -2158,7 +2255,7 @@ static int f2fs_dquot_commit(struct dquot *dquot)
	struct f2fs_sb_info *sbi = F2FS_SB(dquot->dq_sb);
	int ret;

	down_read(&sbi->quota_sem);
	down_read_nested(&sbi->quota_sem, SINGLE_DEPTH_NESTING);
	ret = dquot_commit(dquot);
	if (ret < 0)
		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);

@@ -2182,13 +2279,10 @@ static int f2fs_dquot_acquire(struct dquot *dquot)
static int f2fs_dquot_release(struct dquot *dquot)
{
	struct f2fs_sb_info *sbi = F2FS_SB(dquot->dq_sb);
	int ret;
	int ret = dquot_release(dquot);

	down_read(&sbi->quota_sem);
	ret = dquot_release(dquot);
	if (ret < 0)
		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
	up_read(&sbi->quota_sem);
	return ret;
}

@@ -2196,29 +2290,22 @@ static int f2fs_dquot_mark_dquot_dirty(struct dquot *dquot)
{
	struct super_block *sb = dquot->dq_sb;
	struct f2fs_sb_info *sbi = F2FS_SB(sb);
	int ret;

	down_read(&sbi->quota_sem);
	ret = dquot_mark_dquot_dirty(dquot);
	int ret = dquot_mark_dquot_dirty(dquot);

	/* if we are using journalled quota */
	if (is_journalled_quota(sbi))
		set_sbi_flag(sbi, SBI_QUOTA_NEED_FLUSH);

	up_read(&sbi->quota_sem);
	return ret;
}

static int f2fs_dquot_commit_info(struct super_block *sb, int type)
{
	struct f2fs_sb_info *sbi = F2FS_SB(sb);
	int ret;
	int ret = dquot_commit_info(sb, type);

	down_read(&sbi->quota_sem);
	ret = dquot_commit_info(sb, type);
	if (ret < 0)
		set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
	up_read(&sbi->quota_sem);
	return ret;
}

@@ -3311,7 +3398,7 @@ try_onemore:

	/* init f2fs-specific super block info */
	sbi->valid_super_block = valid_super_block;
	mutex_init(&sbi->gc_mutex);
	init_rwsem(&sbi->gc_lock);
	mutex_init(&sbi->writepages);
	mutex_init(&sbi->cp_mutex);
	mutex_init(&sbi->resize_mutex);

@@ -3400,6 +3487,12 @@ try_onemore:
		goto free_devices;
	}

	err = f2fs_init_post_read_wq(sbi);
	if (err) {
		f2fs_err(sbi, "Failed to initialize post read workqueue");
		goto free_devices;
	}

	sbi->total_valid_node_count =
				le32_to_cpu(sbi->ckpt->valid_node_count);
	percpu_counter_set(&sbi->total_valid_inode_count,

@@ -3544,6 +3637,17 @@ try_onemore:
			goto free_meta;
		}
	}

	/*
	 * If the f2fs is not readonly and fsync data recovery succeeds,
	 * check zoned block devices' write pointer consistency.
	 */
	if (!err && !f2fs_readonly(sb) && f2fs_sb_has_blkzoned(sbi)) {
		err = f2fs_check_write_pointer(sbi);
		if (err)
			goto free_meta;
	}

reset_checkpoint:
	/* f2fs_recover_fsync_data() cleared this already */
	clear_sbi_flag(sbi, SBI_POR_DOING);

@@ -3621,6 +3725,7 @@ free_nm:
	f2fs_destroy_node_manager(sbi);
free_sm:
	f2fs_destroy_segment_manager(sbi);
	f2fs_destroy_post_read_wq(sbi);
free_devices:
	destroy_device_list(sbi);
	kvfree(sbi->ckpt);

@@ -3762,8 +3867,12 @@ static int __init init_f2fs_fs(void)
	err = f2fs_init_bio_entry_cache();
	if (err)
		goto free_post_read;
	err = f2fs_init_bioset();
	if (err)
		goto free_bio_enrty_cache;
	return 0;

free_bio_enrty_cache:
	f2fs_destroy_bio_entry_cache();
free_post_read:
	f2fs_destroy_post_read_processing();
free_root_stats:

@@ -3789,6 +3898,7 @@ fail:

static void __exit exit_f2fs_fs(void)
{
	f2fs_destroy_bioset();
	f2fs_destroy_bio_entry_cache();
	f2fs_destroy_post_read_processing();
	f2fs_destroy_root_stats();
fs/f2fs/sysfs.c (158 changed lines)

@@ -25,6 +25,9 @@ enum {
	DCC_INFO,	/* struct discard_cmd_control */
	NM_INFO,	/* struct f2fs_nm_info */
	F2FS_SBI,	/* struct f2fs_sb_info */
#ifdef CONFIG_F2FS_STAT_FS
	STAT_INFO,	/* struct f2fs_stat_info */
#endif
#ifdef CONFIG_F2FS_FAULT_INJECTION
	FAULT_INFO_RATE,	/* struct f2fs_fault_info */
	FAULT_INFO_TYPE,	/* struct f2fs_fault_info */

@@ -42,6 +45,9 @@ struct f2fs_attr {
	int id;
};

static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
			struct f2fs_sb_info *sbi, char *buf);

static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
{
	if (struct_type == GC_THREAD)

@@ -58,6 +64,10 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
	else if (struct_type == FAULT_INFO_RATE ||
					struct_type == FAULT_INFO_TYPE)
		return (unsigned char *)&F2FS_OPTION(sbi).fault_info;
#endif
#ifdef CONFIG_F2FS_STAT_FS
	else if (struct_type == STAT_INFO)
		return (unsigned char *)F2FS_STAT(sbi);
#endif
	return NULL;
}

@@ -65,35 +75,15 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
static ssize_t dirty_segments_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
	return snprintf(buf, PAGE_SIZE, "%llu\n",
			(unsigned long long)(dirty_segments(sbi)));
	return sprintf(buf, "%llu\n",
			(unsigned long long)(dirty_segments(sbi)));
}

static ssize_t unusable_show(struct f2fs_attr *a,
static ssize_t free_segments_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
	block_t unusable;

	if (test_opt(sbi, DISABLE_CHECKPOINT))
		unusable = sbi->unusable_block_count;
	else
		unusable = f2fs_get_unusable_blocks(sbi);
	return snprintf(buf, PAGE_SIZE, "%llu\n",
		(unsigned long long)unusable);
}

static ssize_t encoding_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
#ifdef CONFIG_UNICODE
	if (f2fs_sb_has_casefold(sbi))
		return snprintf(buf, PAGE_SIZE, "%s (%d.%d.%d)\n",
			sbi->s_encoding->charset,
			(sbi->s_encoding->version >> 16) & 0xff,
			(sbi->s_encoding->version >> 8) & 0xff,
			sbi->s_encoding->version & 0xff);
#endif
	return snprintf(buf, PAGE_SIZE, "(none)");
	return sprintf(buf, "%llu\n",
			(unsigned long long)(free_segments(sbi)));
}

static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,

@@ -102,10 +92,10 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
	struct super_block *sb = sbi->sb;

	if (!sb->s_bdev->bd_part)
		return snprintf(buf, PAGE_SIZE, "0\n");
		return sprintf(buf, "0\n");

	return snprintf(buf, PAGE_SIZE, "%llu\n",
			(unsigned long long)(sbi->kbytes_written +
	return sprintf(buf, "%llu\n",
			(unsigned long long)(sbi->kbytes_written +
				BD_PART_WRITTEN(sbi)));
}

@@ -116,7 +106,7 @@ static ssize_t features_show(struct f2fs_attr *a,
	int len = 0;

	if (!sb->s_bdev->bd_part)
		return snprintf(buf, PAGE_SIZE, "0\n");
		return sprintf(buf, "0\n");

	if (f2fs_sb_has_encrypt(sbi))
		len += snprintf(buf, PAGE_SIZE - len, "%s",

@@ -154,6 +144,9 @@ static ssize_t features_show(struct f2fs_attr *a,
	if (f2fs_sb_has_casefold(sbi))
		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
				len ? ", " : "", "casefold");
	if (f2fs_sb_has_compression(sbi))
		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
				len ? ", " : "", "compression");
	len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
				len ? ", " : "", "pin_file");
	len += snprintf(buf + len, PAGE_SIZE - len, "\n");

@@ -163,9 +156,66 @@ static ssize_t features_show(struct f2fs_attr *a,
static ssize_t current_reserved_blocks_show(struct f2fs_attr *a,
					struct f2fs_sb_info *sbi, char *buf)
{
	return snprintf(buf, PAGE_SIZE, "%u\n", sbi->current_reserved_blocks);
	return sprintf(buf, "%u\n", sbi->current_reserved_blocks);
}

static ssize_t unusable_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
	block_t unusable;

	if (test_opt(sbi, DISABLE_CHECKPOINT))
		unusable = sbi->unusable_block_count;
	else
		unusable = f2fs_get_unusable_blocks(sbi);
	return sprintf(buf, "%llu\n", (unsigned long long)unusable);
}

static ssize_t encoding_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
#ifdef CONFIG_UNICODE
	if (f2fs_sb_has_casefold(sbi))
		return snprintf(buf, PAGE_SIZE, "%s (%d.%d.%d)\n",
			sbi->s_encoding->charset,
			(sbi->s_encoding->version >> 16) & 0xff,
			(sbi->s_encoding->version >> 8) & 0xff,
			sbi->s_encoding->version & 0xff);
#endif
	return sprintf(buf, "(none)");
}

#ifdef CONFIG_F2FS_STAT_FS
static ssize_t moved_blocks_foreground_show(struct f2fs_attr *a,
				struct f2fs_sb_info *sbi, char *buf)
{
	struct f2fs_stat_info *si = F2FS_STAT(sbi);

	return sprintf(buf, "%llu\n",
		(unsigned long long)(si->tot_blks -
			(si->bg_data_blks + si->bg_node_blks)));
}

static ssize_t moved_blocks_background_show(struct f2fs_attr *a,
				struct f2fs_sb_info *sbi, char *buf)
{
	struct f2fs_stat_info *si = F2FS_STAT(sbi);

	return sprintf(buf, "%llu\n",
		(unsigned long long)(si->bg_data_blks + si->bg_node_blks));
}

static ssize_t avg_vblocks_show(struct f2fs_attr *a,
		struct f2fs_sb_info *sbi, char *buf)
{
	struct f2fs_stat_info *si = F2FS_STAT(sbi);

	si->dirty_count = dirty_segments(sbi);
	f2fs_update_sit_info(sbi);
	return sprintf(buf, "%llu\n", (unsigned long long)(si->avg_vblocks));
}
#endif

static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
			struct f2fs_sb_info *sbi, char *buf)
{

@@ -199,7 +249,7 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,

	ui = (unsigned int *)(ptr + a->offset);

	return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
	return sprintf(buf, "%u\n", *ui);
}

static ssize_t __sbi_store(struct f2fs_attr *a,

@@ -389,6 +439,7 @@ enum feat_id {
	FEAT_VERITY,
	FEAT_SB_CHECKSUM,
	FEAT_CASEFOLD,
	FEAT_COMPRESSION,
};

static ssize_t f2fs_feature_show(struct f2fs_attr *a,

@@ -408,7 +459,8 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
	case FEAT_VERITY:
	case FEAT_SB_CHECKSUM:
	case FEAT_CASEFOLD:
		return snprintf(buf, PAGE_SIZE, "supported\n");
	case FEAT_COMPRESSION:
		return sprintf(buf, "supported\n");
	}
	return 0;
}

@@ -437,6 +489,14 @@ static struct f2fs_attr f2fs_attr_##_name = {	\
	.id	= _id,	\
}

#define F2FS_STAT_ATTR(_struct_type, _struct_name, _name, _elname)	\
static struct f2fs_attr f2fs_attr_##_name = {	\
	.attr = {.name = __stringify(_name), .mode = 0444 },	\
	.show = f2fs_sbi_show,	\
	.struct_type = _struct_type,	\
	.offset = offsetof(struct _struct_name, _elname),	\
}

F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent_sleep_time,
							urgent_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time);

@@ -478,11 +538,21 @@ F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate);
F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type);
#endif
F2FS_GENERAL_RO_ATTR(dirty_segments);
F2FS_GENERAL_RO_ATTR(free_segments);
F2FS_GENERAL_RO_ATTR(lifetime_write_kbytes);
F2FS_GENERAL_RO_ATTR(features);
F2FS_GENERAL_RO_ATTR(current_reserved_blocks);
F2FS_GENERAL_RO_ATTR(unusable);
F2FS_GENERAL_RO_ATTR(encoding);
#ifdef CONFIG_F2FS_STAT_FS
F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, cp_foreground_calls, cp_count);
F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, cp_background_calls, bg_cp_count);
F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, gc_foreground_calls, call_count);
F2FS_STAT_ATTR(STAT_INFO, f2fs_stat_info, gc_background_calls, bg_gc);
F2FS_GENERAL_RO_ATTR(moved_blocks_background);
F2FS_GENERAL_RO_ATTR(moved_blocks_foreground);
F2FS_GENERAL_RO_ATTR(avg_vblocks);
#endif

#ifdef CONFIG_FS_ENCRYPTION
F2FS_FEATURE_RO_ATTR(encryption, FEAT_CRYPTO);

@@ -503,6 +573,7 @@ F2FS_FEATURE_RO_ATTR(verity, FEAT_VERITY);
#endif
F2FS_FEATURE_RO_ATTR(sb_checksum, FEAT_SB_CHECKSUM);
F2FS_FEATURE_RO_ATTR(casefold, FEAT_CASEFOLD);
F2FS_FEATURE_RO_ATTR(compression, FEAT_COMPRESSION);

#define ATTR_LIST(name) (&f2fs_attr_##name.attr)
static struct attribute *f2fs_attrs[] = {

@@ -543,12 +614,22 @@ static struct attribute *f2fs_attrs[] = {
	ATTR_LIST(inject_type),
#endif
	ATTR_LIST(dirty_segments),
	ATTR_LIST(free_segments),
	ATTR_LIST(unusable),
	ATTR_LIST(lifetime_write_kbytes),
	ATTR_LIST(features),
	ATTR_LIST(reserved_blocks),
	ATTR_LIST(current_reserved_blocks),
	ATTR_LIST(encoding),
#ifdef CONFIG_F2FS_STAT_FS
	ATTR_LIST(cp_foreground_calls),
	ATTR_LIST(cp_background_calls),
	ATTR_LIST(gc_foreground_calls),
	ATTR_LIST(gc_background_calls),
	ATTR_LIST(moved_blocks_foreground),
	ATTR_LIST(moved_blocks_background),
	ATTR_LIST(avg_vblocks),
#endif
	NULL,
};
ATTRIBUTE_GROUPS(f2fs);

@@ -573,6 +654,7 @@ static struct attribute *f2fs_feat_attrs[] = {
#endif
	ATTR_LIST(sb_checksum),
	ATTR_LIST(casefold),
	ATTR_LIST(compression),
	NULL,
};
ATTRIBUTE_GROUPS(f2fs_feat);

@@ -733,10 +815,12 @@ int __init f2fs_init_sysfs(void)

	ret = kobject_init_and_add(&f2fs_feat, &f2fs_feat_ktype,
				   NULL, "features");
	if (ret)
	if (ret) {
		kobject_put(&f2fs_feat);
		kset_unregister(&f2fs_kset);
	else
	} else {
		f2fs_proc_root = proc_mkdir("fs/f2fs", NULL);
	}
	return ret;
}

@@ -757,8 +841,11 @@ int f2fs_register_sysfs(struct f2fs_sb_info *sbi)
	init_completion(&sbi->s_kobj_unregister);
	err = kobject_init_and_add(&sbi->s_kobj, &f2fs_sb_ktype, NULL,
				"%s", sb->s_id);
	if (err)
	if (err) {
		kobject_put(&sbi->s_kobj);
		wait_for_completion(&sbi->s_kobj_unregister);
		return err;
	}

	if (f2fs_proc_root)
		sbi->s_proc = proc_mkdir(sb->s_id, f2fs_proc_root);

@@ -786,4 +873,5 @@ void f2fs_unregister_sysfs(struct f2fs_sb_info *sbi)
		remove_proc_entry(sbi->sb->s_id, f2fs_proc_root);
	}
	kobject_del(&sbi->s_kobj);
	kobject_put(&sbi->s_kobj);
}
include/linux/f2fs_fs.h

@@ -23,6 +23,7 @@

#define NULL_ADDR		((block_t)0)	/* used as block_t addresses */
#define NEW_ADDR		((block_t)-1)	/* used as block_t addresses */
#define COMPRESS_ADDR		((block_t)-2)	/* used as compressed data flag */

#define F2FS_BYTES_TO_BLK(bytes)	((bytes) >> F2FS_BLKSIZE_BITS)
#define F2FS_BLK_TO_BYTES(blk)		((blk) << F2FS_BLKSIZE_BITS)

@@ -271,6 +272,10 @@ struct f2fs_inode {
	__le32 i_inode_checksum;	/* inode meta checksum */
	__le64 i_crtime;	/* creation time */
	__le32 i_crtime_nsec;	/* creation time in nano scale */
	__le64 i_compr_blocks;	/* # of compressed blocks */
	__u8 i_compress_algorithm;	/* compress algorithm */
	__u8 i_log_cluster_size;	/* log of cluster size */
	__le16 i_padding;	/* padding */
	__le32 i_extra_end[0];	/* for attribute size calculation */
} __packed;
	__le32 i_addr[DEF_ADDRS_PER_INODE];	/* Pointers to data blocks */
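One detail worth keeping in mind when reading the new inode fields: a compression cluster spans a power-of-two number of blocks, namely 1 << i_log_cluster_size. A tiny hedged example (the value 2 mirrors the MIN_COMPRESS_LOG_SIZE default set in default_options() above; 4 KiB blocks are an assumption):

```c
/* Hypothetical illustration of the cluster geometry implied by i_log_cluster_size. */
#include <stdio.h>

int main(void)
{
	unsigned char i_log_cluster_size = 2;   /* assumed, not read from disk */
	unsigned int blocks_per_cluster = 1u << i_log_cluster_size;

	printf("cluster = %u blocks (%u KiB with 4 KiB blocks)\n",
	       blocks_per_cluster, blocks_per_cluster * 4);
	return 0;
}
```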
include/trace/events/f2fs.h

@@ -49,6 +49,7 @@ TRACE_DEFINE_ENUM(CP_SYNC);
TRACE_DEFINE_ENUM(CP_RECOVERY);
TRACE_DEFINE_ENUM(CP_DISCARD);
TRACE_DEFINE_ENUM(CP_TRIMMED);
TRACE_DEFINE_ENUM(CP_PAUSE);

#define show_block_type(type)	\
	__print_symbolic(type,	\

@@ -124,13 +125,14 @@ TRACE_DEFINE_ENUM(CP_TRIMMED);
		{ CP_SYNC,	"Sync" },	\
		{ CP_RECOVERY,	"Recovery" },	\
		{ CP_DISCARD,	"Discard" },	\
		{ CP_UMOUNT,	"Umount" },	\
		{ CP_PAUSE,	"Pause" },	\
		{ CP_TRIMMED,	"Trimmed" })

#define show_fsync_cpreason(type)	\
	__print_symbolic(type,	\
		{ CP_NO_NEEDED,		"no needed" },	\
		{ CP_NON_REGULAR,	"non regular" },	\
		{ CP_COMPRESSED,	"compreesed" },	\
		{ CP_HARDLINK,		"hardlink" },	\
		{ CP_SB_NEED_CP,	"sb needs cp" },	\
		{ CP_WRONG_PINO,	"wrong pino" },	\

@@ -148,6 +150,11 @@ TRACE_DEFINE_ENUM(CP_TRIMMED);
		{ F2FS_GOING_DOWN_METAFLUSH,	"meta flush" },	\
		{ F2FS_GOING_DOWN_NEED_FSCK,	"need fsck" })

#define show_compress_algorithm(type)	\
	__print_symbolic(type,	\
		{ COMPRESS_LZO,		"LZO" },	\
		{ COMPRESS_LZ4,		"LZ4" })

struct f2fs_sb_info;
struct f2fs_io_info;
struct extent_info;

@@ -1710,6 +1717,100 @@ TRACE_EVENT(f2fs_shutdown,
		__entry->ret)
);

DECLARE_EVENT_CLASS(f2fs_zip_start,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
			unsigned int cluster_size, unsigned char algtype),

	TP_ARGS(inode, cluster_idx, cluster_size, algtype),

	TP_STRUCT__entry(
		__field(dev_t,	dev)
		__field(ino_t,	ino)
		__field(pgoff_t, idx)
		__field(unsigned int, size)
		__field(unsigned int, algtype)
	),

	TP_fast_assign(
		__entry->dev = inode->i_sb->s_dev;
		__entry->ino = inode->i_ino;
		__entry->idx = cluster_idx;
		__entry->size = cluster_size;
		__entry->algtype = algtype;
	),

	TP_printk("dev = (%d,%d), ino = %lu, cluster_idx:%lu, "
		"cluster_size = %u, algorithm = %s",
		show_dev_ino(__entry),
		__entry->idx,
		__entry->size,
		show_compress_algorithm(__entry->algtype))
);

DECLARE_EVENT_CLASS(f2fs_zip_end,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
			unsigned int compressed_size, int ret),

	TP_ARGS(inode, cluster_idx, compressed_size, ret),

	TP_STRUCT__entry(
		__field(dev_t,	dev)
		__field(ino_t,	ino)
		__field(pgoff_t, idx)
		__field(unsigned int, size)
		__field(unsigned int, ret)
	),

	TP_fast_assign(
		__entry->dev = inode->i_sb->s_dev;
		__entry->ino = inode->i_ino;
		__entry->idx = cluster_idx;
		__entry->size = compressed_size;
		__entry->ret = ret;
	),

	TP_printk("dev = (%d,%d), ino = %lu, cluster_idx:%lu, "
		"compressed_size = %u, ret = %d",
		show_dev_ino(__entry),
		__entry->idx,
		__entry->size,
		__entry->ret)
);

DEFINE_EVENT(f2fs_zip_start, f2fs_compress_pages_start,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
		unsigned int cluster_size, unsigned char algtype),

	TP_ARGS(inode, cluster_idx, cluster_size, algtype)
);

DEFINE_EVENT(f2fs_zip_start, f2fs_decompress_pages_start,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
		unsigned int cluster_size, unsigned char algtype),

	TP_ARGS(inode, cluster_idx, cluster_size, algtype)
);

DEFINE_EVENT(f2fs_zip_end, f2fs_compress_pages_end,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
		unsigned int compressed_size, int ret),

	TP_ARGS(inode, cluster_idx, compressed_size, ret)
);

DEFINE_EVENT(f2fs_zip_end, f2fs_decompress_pages_end,

	TP_PROTO(struct inode *inode, pgoff_t cluster_idx,
		unsigned int compressed_size, int ret),

	TP_ARGS(inode, cluster_idx, compressed_size, ret)
);

#endif /* _TRACE_F2FS_H */

/* This part must be outside protection */