Once f2fs_gc is done, wait_ms is changed once more, so its tracepoint should be
placed after that.
Reported-by: He YunLei <heyunlei@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
different competitors
We use different page caches (normally the inode's page cache for R/W and the
meta inode's page cache for GC) to cache the same physical block belonging to
an encrypted inode. Writeback of these two page caches should be exclusive, but
currently we don't handle the writeback state well, so there are potential
races:
a)
kworker: f2fs_gc:
- f2fs_write_data_pages
- f2fs_write_data_page
- do_write_data_page
- write_data_page
- f2fs_submit_page_mbio
(page#1 in the inode's page cache was queued
in the f2fs bio cache, ready to be written
to the new blkaddr)
- gc_data_segment
- move_encrypted_block
- pagecache_get_page
(page#2 in the meta inode's page cache
was cached with the invalid data of the
physical block located at the new
blkaddr)
- f2fs_submit_page_mbio
(page#1 was submitted; later, page#2
with invalid data will be submitted)
b)
f2fs_gc:
- gc_data_segment
- move_encrypted_block
- f2fs_submit_page_mbio
(page#1 in the meta inode's page cache was
queued in the f2fs bio cache, ready to be
written to the new blkaddr)
user thread:
- f2fs_write_begin
- f2fs_submit_page_bio
(we submit the request to the block layer
to update page#2 in the inode's page cache
with the physical block located at the new
blkaddr, so here we may read garbage
data from the new blkaddr since GC hasn't
written back page#1 yet)
This patch fixes the above potential races for encrypted inodes.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
After finishing building the free nid cache, we asynchronously readahead 4 more
pages for the next reload; the count of readahead nid pages is fixed.
In some cases, such as on an SMR drive, reading a small fixed number of sectors
each time we trigger readahead may be inefficient, since we will face high seek
overhead, so we'd better let the user configure this parameter from sysfs for
specific workloads.
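For illustration only, the tunable could be set from a small program like the
sketch below (assuming the attribute is exported as ra_nid_pages under
/sys/fs/f2fs/<device>/, as in the sysfs-fs-f2fs ABI documentation; the device
name here is just an example):

	#include <stdio.h>

	int main(void)
	{
		/* example path: substitute your own f2fs block device name */
		FILE *f = fopen("/sys/fs/f2fs/mmcblk0p1/ra_nid_pages", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* readahead more NAT pages per free-nid reload than the default */
		fprintf(f, "%d\n", 128);
		fclose(f);
		return 0;
	}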
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
When there are no free nids in the nid cache, all new node allocators stop
their work to wait for the free nids to be reloaded; however, reloading is
synchronous since we read 4 NAT pages to build the nid cache, which causes long
latency.
This patch readaheads more NAT pages with the READA request flag after
reloading the free nids. It helps to improve performance when users allocate
node ids intensively.
Env: Sandisk 32G sd card
time for i in `seq 1 60000`; { echo -n > /mnt/f2fs/$i; echo XXXXXX > /mnt/f2fs/$i;}
Before:
real 0m2.814s
user 0m1.220s
sys 0m1.536s
After:
real 0m2.711s
user 0m1.136s
sys 0m1.568s
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Now, we use ra_meta_pages to read as many contiguous physical blocks as
possible to improve the performance of subsequent reads. However, ra_meta_pages
uses a synchronous readahead approach, submitting the bio with READ. Since READ
has high priority, it should not be used for preloading blocks whose time of
use is not certain.
This patch supports asynchronous readahead in ra_meta_pages by tagging the bio
with the READA flag in order to allow preloading.
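A minimal sketch of the idea (hedged: the field and flag names mirror the f2fs
I/O path of that era, e.g. struct f2fs_io_info, READ_SYNC and READA, and may
differ in other kernel versions; the 'sync' parameter is the one this patch
adds to ra_meta_pages):

	/* inside ra_meta_pages(), with a new 'bool sync' parameter */
	struct f2fs_io_info fio = {
		.sbi = sbi,
		.type = META,
		/* READ_SYNC for callers that need the pages right away,
		 * READA (low-priority readahead) for pure preloading */
		.rw = sync ? READ_SYNC : READA,
	};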
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
In the recovery and checkpoint flows, we temporarily grab pages in the meta
inode's mapping to cache temporary data. The data in these pages is not
actually f2fs metadata, but we still tag them with the REQ_META flag. However,
a lower device like eMMC may apply optimizations to data of that type.
So, in order to avoid wrong optimizations, we'd better remove that flag for
temporary non-meta pages.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch adds a tracepoint for f2fs_read_data_pages to trace when pages are
read ahead by the VFS.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
For normal inodes, their pages are allocated with __GFP_FS, which can cause
filesystem calls when reclaiming memory.
This can incur a deadlock condition accordingly.
So, this patch addresses the problem by introducing
f2fs_grab_cache_page(.., bool for_write), which calls
grab_cache_page_write_begin() with AOP_FLAG_NOFS.
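A minimal sketch of such a helper (it mirrors the shape the patch describes;
treat it as illustrative rather than as the exact in-tree code):

	static inline struct page *f2fs_grab_cache_page(struct address_space *mapping,
							pgoff_t index, bool for_write)
	{
		if (!for_write)
			return grab_cache_page(mapping, index);
		/* AOP_FLAG_NOFS clears __GFP_FS, so reclaim triggered by this
		 * allocation cannot re-enter the filesystem and deadlock. */
		return grab_cache_page_write_begin(mapping, index, AOP_FLAG_NOFS);
	}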
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
f2fs_collapse_range and f2fs_insert_range change the block addresses directly,
but that can cause uncovered SSA updates.
In that case, we need to give up changing the block addresses and fall back to
buffered writes to keep the filesystem consistent.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
The periodic checkpoint can resolve the previous issue.
So, now we can use this again to improve the reported performance regression:
https://lkml.org/lkml/2015/10/8/20
This reverts commit 15bec0ff5a9ba6d203178fa8772259df6207942a.
This patch introduces F2FS_GOING_DOWN_METAFLUSH which flushes meta pages like
SSA blocks and then blocks all the writes.
This can be used by power-failure tests.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch introduces a periodic checkpoint feature.
Note that this does not enforce checkpoints strictly in terms of trigger
timing; instead, it just aims to help the user experience.
The default value is 60 seconds.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch introduces a tracepoint to monitor background gc behaviors.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch introduces a new ioctl for users who want to trigger a checkpoint
from userspace.
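For illustration, a userspace caller might look like the sketch below (the
ioctl definitions here are assumptions based on the f2fs headers; confirm them
against your kernel's fs/f2fs/f2fs.h or include/uapi/linux/f2fs.h):

	#include <fcntl.h>
	#include <linux/ioctl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	/* assumed values; take the authoritative ones from the kernel headers */
	#define F2FS_IOCTL_MAGIC		0xf5
	#define F2FS_IOC_WRITE_CHECKPOINT	_IO(F2FS_IOCTL_MAGIC, 7)

	int main(int argc, char **argv)
	{
		/* any file or directory on the f2fs mount works as a handle */
		int fd = open(argc > 1 ? argv[1] : "/mnt/f2fs", O_RDONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (ioctl(fd, F2FS_IOC_WRITE_CHECKPOINT, 0) < 0)
			perror("F2FS_IOC_WRITE_CHECKPOINT");
		close(fd);
		return 0;
	}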
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch drops batched gc triggered through the ioctl, since the user can
easily control gc by building a loop around the ->ioctl call.
We support synchronous gc by forcing FG_GC in f2fs_gc, so with it the user can
make sure that all blocks gc'ed in this round have been persisted to the device
by the time the ioctl returns.
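A userspace loop around the ioctl might look like this sketch (the ioctl
definitions are assumptions; confirm them against the kernel's f2fs headers;
sync = 1 asks for FG_GC so each round is persisted before the call returns):

	#include <fcntl.h>
	#include <linux/ioctl.h>
	#include <linux/types.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	/* assumed values; take the authoritative ones from the kernel headers */
	#define F2FS_IOCTL_MAGIC		0xf5
	#define F2FS_IOC_GARBAGE_COLLECT	_IOW(F2FS_IOCTL_MAGIC, 6, __u32)

	int main(int argc, char **argv)
	{
		int fd = open(argc > 1 ? argv[1] : "/mnt/f2fs", O_RDONLY);
		__u32 sync = 1;	/* synchronous foreground gc */
		int i;

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* the caller, not the kernel, decides how many rounds to run */
		for (i = 0; i < 16; i++)
			if (ioctl(fd, F2FS_IOC_GARBAGE_COLLECT, &sync) < 0) {
				perror("F2FS_IOC_GARBAGE_COLLECT");
				break;
			}
		close(fd);
		return 0;
	}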
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
When searching for a victim during gc, if there are no dirty segments in the
filesystem, we still take the time to search the whole dirty segment map. That
is not needed; it's better to skip the search in this condition.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
When doing gc, we search for a victim in the dirty map starting from the
position of the last victim, and we only reset the current search position once
we touch the end of the dirty map, then search the whole dirty map. So
sometimes we search the range [victim, last] twice, which is redundant; this
patch avoids that issue.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
The hit stats of the extent cache keep increasing until remount, and we use the
atomic_t type for the stat variables, so they can easily overflow when we query
the extent cache frequently on a long-running filesystem.
To avoid that, this patch uses atomic64_t for the hit stat variables.
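A minimal kernel-context sketch of the change (the counter name is
illustrative; in f2fs the counters live in struct f2fs_sb_info):

	#include <linux/atomic.h>

	static atomic64_t total_hit_ext = ATOMIC64_INIT(0);

	static inline void stat_inc_total_hit(void)
	{
		atomic64_inc(&total_hit_ext);	/* 64 bits: practically never wraps */
	}

	static inline long long stat_read_total_hit(void)
	{
		return atomic64_read(&total_hit_ext);	/* for the status/debug output */
	}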
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
If we do not call get_victim first, we cannot get a new victim for retrial
path.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch fixes the accounting to maintain the right count of sections freed
by garbage collection when a foreground gc is triggered.
Besides, when a foreground gc is running on the currently selected section,
once we fail to gc one segment it's better to abandon gcing the remaining
segments in that section, because we will select the next victim for foreground
gc anyway, so gcing the remaining segments of the previous section only adds
overhead and long latency for the caller.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch fixes f2fs to update ctime and mtime correctly when truncating a
file to a larger size in ->setattr.
The bug is reported by xfstest generic/313 as below:
generic/313 2s ... - output mismatch (see ./results/generic/313.out.bad)
--- tests/generic/313.out 2015-08-04 15:28:53.430798882 +0800
+++ results/generic/313.out.bad 2015-09-28 17:04:27.294278016 +0800
@@ -1,2 +1,4 @@
QA output created by 313
Silence is golden
+ctime not updated after truncate up
+mtime not updated after truncate up
...
(Run 'diff -u tests/generic/313.out tests/generic/313.out.bad' to see the entire diff)
Ran: generic/313
Failures: generic/313
Failed 1 of 1 tests
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Previously, we skipped dentry block writes when wbc was SYNC_NONE, there was no
memory pressure, and the number of dirty pages was pretty small.
But we didn't skip normal data writes, so the skipping doesn't give us much
benefit for overall performance anyway.
Moreover, by skipping some of these writes, kworker falls into an infinite loop
trying to write blocks when many dir inodes have only one dentry block.
So, this patch stops skipping these writes.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Protecting the recovery flow with cp_rwsem is not needed, since we already
prevent any checkpoint from being triggered by holding cp_mutex.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
In update_sit_info, we use div_u64 to handle the 'u64 divided by u64' case, but
div_u64 can only handle a 32-bit divisor, so the u64 divisor passed to div_u64
gets truncated, resulting in a wrong calculation when showing the f2fs debug
info, as below:
BDF: 464, avg. vblocks: 23509
(BDF should never exceed 100)
So change to div64_u64 to handle this case correctly.
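A minimal kernel-context sketch of the difference (the values are made up just
to show the truncation; update_sit_info uses its own fields):

	#include <linux/math64.h>

	static u64 demo_avg(void)
	{
		u64 blks = 1ULL << 40;			/* dividend */
		u64 segs = (1ULL << 32) + 1000;		/* divisor wider than 32 bits */

		/* div_u64() takes a u32 divisor: the high bits of 'segs' are
		 * dropped, so this actually divides by 1000 and is way off. */
		u64 bad = div_u64(blks, (u32)segs);

		/* div64_u64() takes a full u64 divisor and is correct. */
		u64 good = div64_u64(blks, segs);

		return bad - good;	/* nonzero: the two results differ */
	}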
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch adds a new helper __try_update_largest_extent for cleanup.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This fixes the error handling for calls to various functions in
recover_inline_data: check whether those functions return an error code or the
boolean value false to signal an internal failure, and if so, return false
immediately to the caller of recover_inline_data, since we cannot continue
after a failure of either truncate_inline_inode or truncate_blocks.
Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Switching the extent_cache option dynamically on remount may cause a
consistency issue between the extent cache and the dnode page. Fix it in this
patch to avoid that condition.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
We introduced F2FS_GET_BLOCK_READ in commit e2b4e2bc88 ("f2fs: fix
incorrect mapping for bmap"), but forgot to use this flag in the right
place; fix it.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
truncate_data_blocks_range can truncate in batches, making all the changes to
the dnode page content, dnode page status, extent cache and block count
together.
But previously, truncate_hole() always truncated one block in a dnode page at a
time by invoking truncate_data_blocks_range(,1), which makes things slow.
This patch changes truncate_hole() to truncate in batches all the target blocks
in one direct node through truncate_data_blocks_range, which makes our punch
hole operation in ->fallocate more efficient.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Fix 2 potential problems:
1. When the largest extent needs to be invalidated, it is reset in
__drop_largest_extent, which makes __is_extent_same afterwards always return
false and leaves the largest extent unchanged. Now we update it properly.
2. When an extent is split and the latter part remains in the tree, next_en
should be the latter part instead of the next extent of the original extent.
This would cause a merge failure if there were in-place updates; although there
are none, this fix still makes the code less ambiguous.
This patch also simplifies the code that invalidates extents and optimizes the
procedure that splits an extent into two.
There are a few modifications since the last patch:
1. prev_en is now updated properly.
2. More code and branches are simplified.
Signed-off-by: Fan li <fanofcode.li@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Now we update extents by range; fofs may not be on the largest extent if the
new extent overlaps with it, so add a new function to drop the largest extent
properly.
Signed-off-by: Fan li <fanofcode.li@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch avoids producing new checkpoint blocks before the previous meta
pages have been written completely.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
We get dentry pages from highmem, and their addresses go directly into the
decryption path via f2fs_fname_disk_to_usr.
But sg_init_one assumes the address is not from highmem, so we can get this
panic, since kmap_high is never called while kunmap_high is triggered at the
end.
kernel BUG at ../../../../../../kernel/mm/highmem.c:290!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
...
(kunmap_high+0xb0/0xb8) from [<c0114534>] (__kunmap_atomic+0xa0/0xa4)
(__kunmap_atomic+0xa0/0xa4) from [<c035f028>] (blkcipher_walk_done+0x128/0x1ec)
(blkcipher_walk_done+0x128/0x1ec) from [<c0366c24>] (crypto_cbc_decrypt+0xc0/0x170)
(crypto_cbc_decrypt+0xc0/0x170) from [<c0367148>] (crypto_cts_decrypt+0xc0/0x114)
(crypto_cts_decrypt+0xc0/0x114) from [<c035ea98>] (async_decrypt+0x40/0x48)
(async_decrypt+0x40/0x48) from [<c032ca34>] (f2fs_fname_disk_to_usr+0x124/0x304)
(f2fs_fname_disk_to_usr+0x124/0x304) from [<c03056fc>] (f2fs_fill_dentries+0xac/0x188)
(f2fs_fill_dentries+0xac/0x188) from [<c03059c8>] (f2fs_readdir+0x1f0/0x300)
(f2fs_readdir+0x1f0/0x300) from [<c0218054>] (vfs_readdir+0x90/0xb4)
(vfs_readdir+0x90/0xb4) from [<c0218418>] (SyS_getdents64+0x64/0xcc)
(SyS_getdents64+0x64/0xcc) from [<c0105ba0>] (ret_fast_syscall+0x0/0x30)
Cc: <stable@vger.kernel.org>
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
In this patch, we try to reorganize f2fs_map_blocks to make the block mapping
flow clearer by using the following structure:
/* check status of mapping */
if (unmapped) {
	/* blkaddr == NULL_ADDR || blkaddr == NEW_ADDR */
	if (create) {
		/* write path, handle dio write case here */
		alloc_and_map;
	} else {
		/*
		 * handle read cases from all call paths:
		 *  1. generic read;
		 *  2. dio read;
		 *  3. fiemap;
		 *  4. bmap
		 */
	}
}
/* map buffer_header */
Besides, this patch handles a missing case correctly for dio writes: when
__allocate_data_blocks fails, f2fs_map_blocks will not allocate blocks for the
preallocated region but instead returns an unmapped buffer head, which results
in a dio write failure.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
We have a potential overflow issue when calculating the size of an object: when
we left-shift index by PAGE_CACHE_SHIFT bits, and the type of index only has 32
bits on a 32-bit architecture, the left shift will overflow, i.e.:
pgoff_t index = 0xFFFFFFFF;
loff_t size = index << PAGE_CACHE_SHIFT;
size: 0xFFFFF000
So we should cast index to a 64-bit type to avoid this issue.
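A minimal fragment showing the fix (PAGE_CACHE_SHIFT is the page shift macro of
that kernel era; newer kernels use PAGE_SHIFT):

	pgoff_t index = 0xFFFFFFFF;

	/* widen to 64 bits before shifting so the high bits are not lost */
	loff_t size = (loff_t)index << PAGE_CACHE_SHIFT;	/* 0xFFFFFFFF000 */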
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
When shrinking extent cache, we have two steps in the flow:
1) shrink objects which are unreferenced by inodes;
2) shrink objects from LRU list of extent cache.
In step 1, if we haven't shrunk enough objects, we will try step 2, but before
that we didn't update the search position, which may point to the last inode
index in the global extent tree, resulting in a failure to shrink objects by
traversing all the inodes' extent trees.
In this patch, we reset the search position to the beginning of the global
extent tree to fix this.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This patch changes f2fs_fallocate to verify the file type early, as a cleanup;
meanwhile, it also adds the missing verification for expand_inode_data.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
As the comment says, we don't need to call f2fs_lock_op in write_inode to
prevent producing dirty node pages all the time.
That only matters when there are not enough free sections, and we can avoid it
by calling balance_fs beforehand.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This number is referenced by checkpoint under node_write lock.
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
This fixes the incorrect return statement at the end of
f2fs_ioc_release_volatile_write. Returning zero unconditionally is wrong,
because the preceding call to punch_hole can fail; we should return
punch_hole's result directly so that callers of f2fs_ioc_release_volatile_write
are informed when that call fails.
Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Reviewed-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Rename trace_f2fs_update_extent_tree to trace_f2fs_update_extent_tree_range,
then expand it and enable it to trace batched extent info updates.
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"This addresses a couple of issues found with RT, a broken initrd
message in the console log and a simple performance fix for some MMC
workloads.
Summary:
- A couple of locking fixes for RT kernels
- Avoid printing bogus initrd warnings when initrd isn't present
- Performance fix for random mmap file readahead
- Typo fix"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: replace read_lock to rcu lock in call_break_hook
arm64: Don't relocate non-existent initrd
arm64: convert patch_lock to raw lock
arm64: readahead: fault retry breaks mmap file read random detection
arm64: debug: Fix typo in debug-monitors.c
Merge tag 'fbdev-fixes-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
Pull fbdev fixes from Tomi Valkeinen:
- fbdev: Minor fixes to broadsheetfb, fsl-diu-fb, mb862xxfb, tridentfb,
omapfb
- display-timing: Fix memory leak in error path
* tag 'fbdev-fixes-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux:
video: of: fix memory leak
fbdev: broadsheetfb: fix memory leak
OMAPDSS: panel-sony-acx565akm: Export OF module alias information
fbdev: omap2: connector-dvi: use of_get_i2c_adapter_by_node interface
tridentfb: Fix set_lwidth on TGUI9440 and CYBER9320
tridentfb: fix hang on Blade3D with CONFIG_CC_OPTIMIZE_FOR_SIZE
video: fbdev: mb862xx: Fix module autoload for OF platform driver
video: fbdev: fsl: Fix the sleep function for FSL DIU module
Merge tag 'regmap-fix-v4.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fixes from Mark Brown:
"A couple of fixes for the debugfs information on the register map,
fixing issues with very small reads potentially causing underflows and
wraparounds"
* tag 'regmap-fix-v4.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: debugfs: Don't bother actually printing when calculating max length
regmap: debugfs: Ensure we don't underflow when printing access masks
Merge tag 'spi-fix-v4.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"A couple of very minor fixes, one for error handling in the Davinci
driver probe function and another making the Renesas sh-msiof DT
binding documentation correspond to what's actually implemented"
* tag 'spi-fix-v4.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: sh-msiof: Match renesas,rx-fifo-size in DT bindings doc with driver
spi: davinci: fix handling platform_get_irq result