This changes DM integrity to count the number of checksum failures and
report the counter in response to STATUSTYPE_INFO request (via 'dmsetup
status').
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Rather than write the entire dm-bufio buffer when only a subset is
changed, improve dm-bufio (and dm-integrity) by only writing the subset
of the buffer that changed.
Update dm-integrity to make use of dm-bufio's new
dm_bufio_mark_partial_buffer_dirty() interface.
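As an illustration only (the dm_bufio_client, block number, offset and length below are hypothetical, not taken from the dm-integrity code), a caller that modifies a small byte range of a buffer can now mark just that range dirty:

        void *data;
        struct dm_buffer *b;

        data = dm_bufio_read(client, block, &b);
        if (IS_ERR(data))
                return PTR_ERR(data);

        /* modify bytes [offset, offset + len) of the buffer in memory */
        memcpy(data + offset, new_bytes, len);

        /* only the modified range is marked dirty, not the whole buffer */
        dm_bufio_mark_partial_buffer_dirty(b, offset, offset + len);
        dm_bufio_release(b);

When the buffer is later written back, only the dirty subset needs to hit the media.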
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
We don't need to update the original dm request partially when ending
each cloned bio: just update original dm request once when the whole
cloned request is finished. This still allows full support for partial
completion because a new 'completed' counter accounts for incremental
progress as the clone bios complete.
Partial request update can be a bit expensive, so we should try to avoid
it, especially because it is run in softirq context.
Avoiding all the partial request updates fixes both hard lockup and
soft lockups that were easily reproduced while running Laurence's
test[1] on IB/SRP.
BTW, after d4acf3650c ("block: Make blk_mq_delay_kick_requeue_list()
rerun the queue at a quiet time"), we need to make the test more
aggressive for reproducing the lockup:
1) run hammer_write.sh 32 or 64 concurrently.
2) write 8M each time
[1] https://marc.info/?l=linux-block&m=150220185510245&w=2
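Returning to the 'completed' counter mentioned above, the shape of the change is roughly the following (sketched with illustrative names, not the literal dm-rq.c diff):

        /* in struct dm_rq_target_io: */
        unsigned completed;     /* bytes finished so far on this clone */

        /* softirq path, when one cloned bio ends: no partial update of the original */
        tio->completed += nr_bytes;

        /* once the whole cloned request is done, update the original request once */
        blk_update_request(tio->orig, error, tio->completed);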
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
DM_MAPIO_DELAY_REQUEUE causes dm-mq to requeue after a delay but
causes dm-sq to requeue immediately. Make the behavior of dm-sq
consistent with that of dm-mq.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
WARN_ONCE() if __multipath_map_bio() returns an unsupported return value.
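A minimal sketch of the kind of check this adds (the switch shown is illustrative; the real handler also remaps or requeues the bio for the supported cases):

        switch (r) {
        case DM_MAPIO_SUBMITTED:
        case DM_MAPIO_REMAPPED:
        case DM_MAPIO_REQUEUE:
        case DM_MAPIO_KILL:
                break;
        default:
                WARN_ONCE(true, "__multipath_map_bio() returned %d\n", r);
                break;
        }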
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
When using the block layer in single queue mode, get_request()
returns ERR_PTR(-EAGAIN) if the queue is dying and the REQ_NOWAIT
flag has been passed to get_request(). Avoid that the kernel
reports soft lockup complaints in this case due to continuous
requeuing activity.
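A sketch of the resulting behaviour (hypothetical shape, not the literal dm-mpath diff): when clone allocation fails and the underlying queue is dying, request a delayed requeue instead of requeuing immediately.

        clone = blk_get_request(q, rq->cmd_flags | REQ_NOWAIT, GFP_ATOMIC);
        if (IS_ERR(clone)) {
                if (blk_queue_dying(q))
                        /* instant retries would just spin the CPU */
                        return DM_MAPIO_DELAY_REQUEUE;
                return DM_MAPIO_REQUEUE;
        }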
Fixes: 7083abbbf ("dm mpath: avoid that path removal can trigger an infinite loop")
Cc: stable@vger.kernel.org
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Using the same rate limiting state for different kinds of messages
is wrong because this can cause a high frequency message to suppress
a report of a low frequency message. Hence use a unique rate limiting
state per message type.
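A minimal sketch of the pattern (the state and message names are illustrative): each message type gets its own ratelimit state instead of sharing one.

        #include <linux/ratelimit.h>

        static DEFINE_RATELIMIT_STATE(no_path_rs, DEFAULT_RATELIMIT_INTERVAL,
                                      DEFAULT_RATELIMIT_BURST);
        static DEFINE_RATELIMIT_STATE(io_error_rs, DEFAULT_RATELIMIT_INTERVAL,
                                      DEFAULT_RATELIMIT_BURST);

        if (__ratelimit(&no_path_rs))
                pr_err("no usable path\n");

        if (__ratelimit(&io_error_rs))
                pr_err("I/O error on device\n");

The printk_ratelimited()/pr_err_ratelimited() helpers give the same effect, since they expand to a unique static ratelimit state per call site.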
Fixes: 71a16736a1 ("dm: use local printk ratelimit")
Cc: stable@vger.kernel.org
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Retry requests instead of failing them if an out-of-memory error occurs
or the block driver below dm-mpath is busy. This restores the v4.12
behavior of noretry_error(), namely that -ENOMEM results in a retry.
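The distinction, sketched as a simplified illustration of noretry_error() (not the exact dm-mpath function body):

        static bool noretry_error(blk_status_t error)
        {
                switch (error) {
                case BLK_STS_NOTSUPP:
                case BLK_STS_NOSPC:
                case BLK_STS_TARGET:
                case BLK_STS_NEXUS:
                case BLK_STS_MEDIUM:
                        return true;    /* fail the request */
                default:
                        /* BLK_STS_RESOURCE (what -ENOMEM maps to) and the rest: retry */
                        return false;
                }
        }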
Fixes: 2a842acab1 ("block: introduce new block status code type")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Another fix, this time in common IOMMU sysfs code
- In the conversion from the old iommu sysfs-code to the
iommu_device_register interface, I missed updating the
release path for the struct device associated with an IOMMU.
It freed the 'struct device', which was a pointer before, but
is now embedded in another struct. Freeing from the middle of
allocated memory had all kinds of nasty side effects when an
IOMMU was unplugged. Unfortunately nobody unplugged an IOMMU
until now, so this was not discovered earlier. The fix is to
make the 'struct device' a pointer again.
Merge tag 'iommu-fixes-v4.13-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fix from Joerg Roedel:
"Another fix, this time in common IOMMU sysfs code.
In the conversion from the old iommu sysfs-code to the
iommu_device_register interface, I missed updating the release path
for the struct device associated with an IOMMU. It freed the 'struct
device', which was a pointer before, but is now embedded in another
struct.
Freeing from the middle of allocated memory had all kinds of nasty
side effects when an IOMMU was unplugged. Unfortunately nobody
unplugged an IOMMU until now, so this was not discovered earlier. The
fix is to make the 'struct device' a pointer again"
* tag 'iommu-fixes-v4.13-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu: Fix wrong freeing of iommu_device->dev
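The shape of the problem and the fix, sketched in simplified form (field list trimmed; see include/linux/iommu.h for the real definition):

        /* before the fix: releasing the refcounted device frees the middle of this struct */
        struct iommu_device {
                struct list_head list;
                const struct iommu_ops *ops;
                struct device dev;
        };

        /* after the fix: 'dev' is allocated separately again and can be freed on its own */
        struct iommu_device {
                struct list_head list;
                const struct iommu_ops *ops;
                struct device *dev;
        };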
Here is a single misc driver fix for 4.13-rc7. It resolves a reported
problem in the Android binder driver due to previous patches in 4.13-rc.
It's been in linux-next with no reported issues.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'char-misc-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc fix from Greg KH:
"Here is a single misc driver fix for 4.13-rc7. It resolves a reported
problem in the Android binder driver due to previous patches in
4.13-rc.
It's been in linux-next with no reported issues"
* tag 'char-misc-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
ANDROID: binder: fix proc->tsk check.
Here are few small staging driver fixes, and some more IIO driver fixes
for 4.13-rc7. Nothing major, just resolutions for some reported
problems.
All of these have been in linux-next with no reported problems.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'staging-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
Pull staging/iio fixes from Greg KH:
"Here are few small staging driver fixes, and some more IIO driver
fixes for 4.13-rc7. Nothing major, just resolutions for some reported
problems.
All of these have been in linux-next with no reported problems"
* tag 'staging-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
iio: magnetometer: st_magn: remove ihl property for LSM303AGR
iio: magnetometer: st_magn: fix status register address for LSM303AGR
iio: hid-sensor-trigger: Fix the race with user space powering up sensors
iio: trigger: stm32-timer: fix get trigger mode
iio: imu: adis16480: Fix acceleration scale factor for adis16480
iio: Fix some documentation warnings
staging: rtl8188eu: add RNX-N150NUB support
Revert "staging: fsl-mc: be consistent when checking strcmp() return"
iio: adc: stm32: fix common clock rate
iio: adc: ina219: Avoid underflow for sleeping time
iio: trigger: stm32-timer: add enable attribute
iio: trigger: stm32-timer: fix get/set down count direction
iio: trigger: stm32-timer: fix write_raw return value
iio: trigger: stm32-timer: fix quadrature mode get routine
iio: bmp280: properly initialize device for humidity reading
NTB bug fixes to address an incorrect ntb_mw_count reference in the NTB
transport, improperly bringing down the link if SPADs are corrupted, and
an out-of-order issue regarding link negotiation and data passing.
Merge tag 'ntb-4.13-bugfixes' of git://github.com/jonmason/ntb
Pull NTB fixes from Jon Mason:
"NTB bug fixes to address an incorrect ntb_mw_count reference in the
NTB transport, improperly bringing down the link if SPADs are
corrupted, and an out-of-order issue regarding link negotiation and
data passing"
* tag 'ntb-4.13-bugfixes' of git://github.com/jonmason/ntb:
ntb: ntb_test: ensure the link is up before trying to configure the mws
ntb: transport shouldn't disable link due to bogus values in SPADs
ntb: use correct mw_count function in ntb_tool and ntb_transport
The "lock_page_killable()" function waits for exclusive access to the
page lock bit using the WQ_FLAG_EXCLUSIVE bit in the waitqueue entry
set.
That means that if it gets woken up, other waiters may have been
skipped.
That, in turn, means that if it sees the page being unlocked, it *must*
take that lock and return success, even if a lethal signal is also
pending.
So instead of checking for lethal signals first, we need to check for
them after we've checked the actual bit that we were waiting for. Even
if that might then delay the killing of the process.
This matches the order of the old "wait_on_bit_lock()" infrastructure
that the page locking used to use (and is still used in a few other
areas).
Note that if we still return an error after having unsuccessfully tried
to acquire the page lock, that is ok: that means that some other thread
was able to get ahead of us and lock the page, and when that other
thread then unlocks the page, the wakeup event will be repeated. So any
other pending waiters will now get properly woken up.
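In simplified form, the required ordering looks like this (a sketch only, not the actual mm/filemap.c wait loop):

        for (;;) {
                if (!test_and_set_bit_lock(PG_locked, &page->flags))
                        return 0;               /* we took the lock: must report success */

                if (fatal_signal_pending(current))
                        return -EINTR;          /* checked only after the lock attempt */

                io_schedule();                  /* wait for the next wakeup */
        }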
Fixes: 6290602709 ("mm: add PageWaiters indicating tasks are waiting for a page bit")
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Jan Kara <jack@suse.cz>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tim Chen and Kan Liang have been battling a customer load that shows
extremely long page wakeup lists. The cause seems to be constant NUMA
migration of a hot page that is shared across a lot of threads, but the
actual root cause for the exact behavior has not been found.
Tim has a patch that batches the wait list traversal at wakeup time, so
that we at least don't get long uninterruptible cases where we traverse
and wake up thousands of processes and get nasty latency spikes. That
is likely 4.14 material, but we're still discussing the page waitqueue
specific parts of it.
In the meantime, I've tried to look at making the page wait queues less
expensive, and failing miserably. If you have thousands of threads
waiting for the same page, it will be painful. We'll need to try to
figure out the NUMA balancing issue some day, in addition to avoiding
the excessive spinlock hold times.
That said, having tried to rewrite the page wait queues, I can at least
fix up some of the braindamage in the current situation. In particular:
(a) we don't want to continue walking the page wait list if the bit
we're waiting for already got set again (which seems to be one of
the patterns of the bad load). That makes no progress and just
causes pointless cache pollution chasing the pointers.
(b) we don't want to put the non-locking waiters always on the front of
the queue, and the locking waiters always on the back. Not only is
that unfair, it means that we wake up thousands of reading threads
that will just end up being blocked by the writer later anyway.
Also add a comment about the layout of 'struct wait_page_key' - there is
an external user of it in the cachefiles code that means that it has to
match the layout of 'struct wait_bit_key' in the two first members. It
so happens to match, because 'struct page *' and 'unsigned long *' end
up having the same values simply because the page flags are the first
member in struct page.
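For reference, the layout constraint described above looks roughly like this (treat the exact field lists as illustrative and check the headers):

        struct wait_page_key {
                struct page *page;      /* must line up with wait_bit_key.flags */
                int bit_nr;             /* must line up with wait_bit_key.bit_nr */
                int page_match;
        };

        struct wait_bit_key {
                void *flags;
                int bit_nr;
                unsigned long timeout;
        };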
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Christopher Lameter <cl@linux.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We have a MAX_LFS_FILESIZE macro that is meant to be filled in by
filesystems (and other IO targets) that know they are 64-bit clean and
don't have any 32-bit limits in their IO path.
It turns out that our 32-bit value for that limit was bogus. On 32-bit,
the VM layer is limited by the page cache to only 32-bit index values,
but our logic for that was confusing and actually wrong. We used to
define that value to
(((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)
which is actually odd in several ways: it limits the index to 31 bits,
and then it limits files so that they can't have data in that last byte
of a page that has the highest 31-bit index (ie page index 0x7fffffff).
Neither of those limitations make sense. The index is actually the full
32 bit unsigned value, and we can use that whole full page. So the
maximum size of the file would logically be "PAGE_SIZE << BITS_PER_LONG".
However, we do want to avoid the maximum index, because we have code
that iterates over the page indexes, and we don't want that code to
overflow. So the maximum size of a file on a 32-bit host should
actually be one page less than the full 32-bit index.
So the actual limit is ULONG_MAX << PAGE_SHIFT. That means that we will
not actually be using the page of that last index (ULONG_MAX), but we
can grow a file up to that limit.
The wrong value of MAX_LFS_FILESIZE actually caused problems for Doug
Nazar, who was still using a 32-bit host, but with a 9.7TB 2 x RAID5
volume. It turns out that our old MAX_LFS_FILESIZE was 8TiB (well, one
byte less), but the actual true VM limit is one page less than 16TiB.
This was invisible until commit c2a9737f45 ("vfs,mm: fix a dead loop
in truncate_inode_pages_range()"), which started applying that
MAX_LFS_FILESIZE limit to block devices too.
NOTE! On 64-bit, the page index isn't a limiter at all, and the limit is
actually just the offset type itself (loff_t), which is signed. But for
clarity, on 64-bit, just use the maximum signed value, and don't make
people have to count the number of 'f' characters in the hex constant.
So just use LLONG_MAX for the 64-bit case. That was what the value had
been before too, just written out as a hex constant.
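Putting the definitions side by side, as reconstructed from the description above:

        /* 32-bit, before: one byte short of 8TiB with 4k pages */
        #define MAX_LFS_FILESIZE        (((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)

        /* 32-bit, after: one page short of 16TiB with 4k pages */
        #define MAX_LFS_FILESIZE        ((loff_t)ULONG_MAX << PAGE_SHIFT)

        /* 64-bit, after (and before, as a hex constant): the maximum signed offset */
        #define MAX_LFS_FILESIZE        ((loff_t)LLONG_MAX)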
Fixes: c2a9737f45 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
Reported-and-tested-by: Doug Nazar <nazard@nazar.ca>
Cc: Andreas Dilger <adilger@dilger.ca>
Cc: Mark Fasheh <mfasheh@versity.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Dave Kleikamp <shaggy@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull input fixes from Dmitry Torokhov:
- a tweak to the IBM Trackpoint driver that helps recognizing
trackpoints on newer Lenovo Carbons
- a fix to the ALPS driver solving scroll issues on some Dells
- yet another ACPI ID has been added to the Elan I2C touchpad driver
- quieted diagnostic message in soc_button_array driver
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: ALPS - fix two-finger scroll breakage in right side on ALPS touchpad
Input: soc_button_array - silence -ENOENT error on Dell XPS13 9365
Input: trackpoint - add new trackpoint firmware ID
Input: elan_i2c - add ELAN0602 ACPI ID to support Lenovo Yoga310
Pull x86 fixes from Ingo Molnar:
"Two fixes: one for an ldt_struct handling bug and a cherry-picked
objtool fix"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm: Fix use-after-free of ldt_struct
objtool: Fix '-mtune=atom' decoding support in objtool 2.0
Pull timer fix from Ingo Molnar:
"Fix a timer granularity handling race+bug, which would manifest itself
by spuriously increasing timeouts of some timers (from 1 jiffy to ~500
jiffies in the worst case measured) in certain nohz states"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
timers: Fix excessive granularity of new timers after a nohz idle
Pull perf fix from Ingo Molnar:
"A single fix to not allow nonsensical event groups that result in
kernel warnings"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Fix group {cpu,task} validation
Merge misc fixes from Andrew Morton:
"6 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm/memblock.c: reversed logic in memblock_discard()
fork: fix incorrect fput of ->exe_file causing use-after-free
mm/madvise.c: fix freeing of locked page with MADV_FREE
dax: fix deadlock due to misaligned PMD faults
mm, shmem: fix handling /sys/kernel/mm/transparent_hugepage/shmem_enabled
PM/hibernate: touch NMI watchdog when creating snapshot
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fixes from Paolo Bonzini:
"Bugfixes for x86, PPC and s390"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: PPC: Book3S: Fix race and leak in kvm_vm_ioctl_create_spapr_tce()
KVM, pkeys: do not use PKRU value in vcpu->arch.guest_fpu.state
KVM: x86: simplify handling of PKRU
KVM: x86: block guest protection keys unless the host has them enabled
KVM: PPC: Book3S HV: Add missing barriers to XIVE code and document them
KVM: PPC: Book3S HV: Workaround POWER9 DD1.0 bug causing IPB bit loss
KVM: PPC: Book3S HV: Use msgsync with hypervisor doorbells on POWER9
KVM: s390: sthyi: fix specification exception detection
KVM: s390: sthyi: fix sthyi inline assembly
Fixes two obvious bugs in virtio pci.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio fixes from Michael Tsirkin:
"Fixes two obvious bugs in virtio pci"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio_pci: fix cpu affinity support
virtio_blk: fix incorrect message when disk is resized
Just one fix, to add a barrier in the switch_mm() code to make sure the mm
cpumask update is ordered vs the MMU starting to load translations. As far as we
know no one's actually hit the bug, but that's just luck.
Thanks to:
Benjamin Herrenschmidt, Nicholas Piggin.
Merge tag 'powerpc-4.13-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fix from Michael Ellerman:
"Just one fix, to add a barrier in the switch_mm() code to make sure
the mm cpumask update is ordered vs the MMU starting to load
translations. As far as we know no one's actually hit the bug, but
that's just luck.
Thanks to Benjamin Herrenschmidt, Nicholas Piggin"
* tag 'powerpc-4.13-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/mm: Ensure cpumask update is ordered
Merge tag 'cifs-fixes-for-4.13-rc6-and-stable' of git://git.samba.org/sfrench/cifs-2.6
Pull cifs fixes from Steve French:
"Some bug fixes for stable for cifs"
* tag 'cifs-fixes-for-4.13-rc6-and-stable' of git://git.samba.org/sfrench/cifs-2.6:
cifs: return ENAMETOOLONG for overlong names in cifs_open()/cifs_lookup()
cifs: Fix df output for users with quota limits
Two fixes - one for a 4.13 regression, and the other for an older one:
* Atmel NAND: since we started utilizing ONFI timings, we found that we
were being too strict at rejecting them, partly due to discrepancies
in ONFI 4.0 and earlier versions. Relax the restriction to keep these
platforms booting. This is a 4.13-rc1 regression.
* nandsim: repeated probe/removal may not work after a failed init,
because we didn't free up our debugfs files properly on the failure
path. This has been around since 3.8, but it's nice to get this fixed
now in a nice easy patch that can target -stable, since there's
already refactoring work (that also fixes the issue) targeted for the
next merge window
Merge tag 'for-linus-20170825' of git://git.infradead.org/linux-mtd
Pull MTD fixes from Brian Norris:
"Two fixes - one for a 4.13 regression, and the other for an older one:
- Atmel NAND: since we started utilizing ONFI timings, we found that
we were being too strict at rejecting them, partly due to
discrepancies in ONFI 4.0 and earlier versions. Relax the
restriction to keep these platforms booting. This is a 4.13-rc1
regression.
- nandsim: repeated probe/removal may not work after a failed init,
because we didn't free up our debugfs files properly on the failure
path. This has been around since 3.8, but it's nice to get this
fixed now in a nice easy patch that can target -stable, since
there's already refactoring work (that also fixes the issue)
targeted for the next merge window"
* tag 'for-linus-20170825' of git://git.infradead.org/linux-mtd:
mtd: nand: atmel: Relax tADL_min constraint
mtd: nandsim: remove debugfs entries in error path
Pull block fixes from Jens Axboe:
"A small batch of fixes that should be included for the 4.13 release.
This contains:
- Revert of the 4k loop blocksize support. Even with a recent batch
of 4 fixes, we're still not really happy with it. Rather than be
stuck with an API issue, let's revert it and get it right for 4.14.
- Trivial patch from Bart, adding a few flags to the blk-mq debugfs
exports that were added in this release, but not to the debugfs
parts.
- Regression fix for bsg, fixing a potential kernel panic. From
Benjamin.
- Tweak for the blk throttling, improving how we account discards.
From Shaohua"
* 'for-linus' of git://git.kernel.dk/linux-block:
blk-mq-debugfs: Add names for recently added flags
bsg-lib: fix kernel panic resulting from missing allocation of reply-buffer
Revert "loop: support 4k physical blocksize"
blk-throttle: cap discard request size
Pull i2c fixes from Wolfram Sang:
"I2C has some bugfixes for you: mainly Jarkko fixed up a few things in
the designware driver regarding the new slave mode. But Ulf also fixed
a long-standing suspend problem whose solution has now been agreed on. Plus, some simple
stuff which nonetheless needs fixing"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: designware: Fix runtime PM for I2C slave mode
i2c: designware: Remove needless pm_runtime_put_noidle() call
i2c: aspeed: fixed potential null pointer dereference
i2c: simtec: use release_mem_region instead of release_resource
i2c: core: Make comment about I2C table requirement to reflect the code
i2c: designware: Fix standard mode speed when configuring the slave mode
i2c: designware: Fix oops from i2c_dw_irq_handler_slave
i2c: designware: Fix system suspend
irq_create_affinity_masks() can return NULL on non-SMP systems, when there
are not enough "free" vectors available to spread, or if memory allocation
for the CPU masks fails. Only the allocation failure is of interest, and
even then the system will work just fine except for non-optimally spread
vectors. Thus remove the warnings.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: David S. Miller <davem@davemloft.net>
We're keeping in a good shape, this batch contains just a few small
fixes (a regression fix for ASoC rt5677 codec, NULL dereference and
error-path fixes in firewire, and a corner-case ioctl error fix for
user TLV), as well as usual quirks for USB-audio and HD-audio.
Merge tag 'sound-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"We're keeping in a good shape, this batch contains just a few small
fixes (a regression fix for ASoC rt5677 codec, NULL dereference and
error-path fixes in firewire, and a corner-case ioctl error fix for
user TLV), as well as usual quirks for USB-audio and HD-audio"
* tag 'sound-4.13-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ASoC: rt5677: Reintroduce I2C device IDs
ALSA: hda - Add stereo mic quirk for Lenovo G50-70 (17aa:3978)
ALSA: core: Fix unexpected error at replacing user TLV
ALSA: usb-audio: Add delay quirk for H650e/Jabra 550a USB headsets
ALSA: firewire-motu: destroy stream data surely at failure of card initialization
ALSA: firewire: fix NULL pointer dereference when releasing uninitialized data of iso-resource
A single fix for tegra210-adma driver to check of_irq_get() error
Merge tag 'dmaengine-fix-4.13-rc7' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fix from Vinod Koul:
"A single fix for tegra210-adma driver to check of_irq_get() error"
* tag 'dmaengine-fix-4.13-rc7' of git://git.infradead.org/users/vkoul/slave-dma:
dmaengine: tegra210-adma: fix of_irq_get() error check
Merge tag 'drm-fixes-for-v4.13-rc7' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Fixes for rc7, nothing too crazy, some core, i915, and sunxi fixes,
Intel CI has been responsible for some of these fixes being required"
* tag 'drm-fixes-for-v4.13-rc7' of git://people.freedesktop.org/~airlied/linux:
drm/i915/gvt: Fix the kernel null pointer error
drm: Release driver tracking before making the object available again
drm/i915: Clear lost context-switch interrupts across reset
drm/i915/bxt: use NULL for GPIO connection ID
drm/i915/cnl: Fix LSPCON support.
drm/i915/vbt: ignore extraneous child devices for a port
drm/i915: Initialize 'data' in intel_dsi_dcs_backlight.c
drm/atomic: If the atomic check fails, return its value first
drm/atomic: Handle -EDEADLK with out-fences correctly
drm: Fix framebuffer leak
drm/imx: ipuv3-plane: fix YUV framebuffer scanout on the base plane
gpu: ipu-v3: add DRM dependency
drm/rockchip: Fix suspend crash when drm is not bound
drm/sun4i: Implement drm_driver lastclose to restore fbdev console
Commit 7c05126793 ("mm, fork: make dup_mmap wait for mmap_sem for
write killable") made it possible to kill a forking task while it is
waiting to acquire its ->mmap_sem for write, in dup_mmap().
However, it was overlooked that this introduced a new error path before
a reference is taken on the mm_struct's ->exe_file. Since the
->exe_file of the new mm_struct was already set to the old ->exe_file by
the memcpy() in dup_mm(), it was possible for the mmput() in the error
path of dup_mm() to drop a reference to ->exe_file which was never
taken.
This caused the struct file to later be freed prematurely.
Fix it by updating mm_init() to NULL out the ->exe_file, in the same
place it clears other things like the list of mmaps.
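A sketch of what that amounts to (placement within mm_init() is illustrative):

        /* in mm_init(), next to the other per-mm initialisations: */
        mm->mmap = NULL;
        RCU_INIT_POINTER(mm->exe_file, NULL);   /* don't inherit the pointer memcpy()'d from the old mm */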
This bug was found by syzkaller. It can be reproduced using the
following C program:
#define _GNU_SOURCE
#include <pthread.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

static void *mmap_thread(void *_arg)
{
        for (;;) {
                mmap(NULL, 0x1000000, PROT_READ,
                     MAP_POPULATE|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
        }
}

static void *fork_thread(void *_arg)
{
        usleep(rand() % 10000);
        fork();
}

int main(void)
{
        fork();
        fork();
        fork();
        for (;;) {
                if (fork() == 0) {
                        pthread_t t;

                        pthread_create(&t, NULL, mmap_thread, NULL);
                        pthread_create(&t, NULL, fork_thread, NULL);
                        usleep(rand() % 10000);
                        syscall(__NR_exit_group, 0);
                }
                wait(NULL);
        }
}
No special kernel config options are needed. It usually causes a NULL
pointer dereference in __remove_shared_vm_struct() during exit, or in
dup_mmap() (which is usually inlined into copy_process()) during fork.
Both are due to a vm_area_struct's ->vm_file being used after it's
already been freed.
Google Bug Id: 64772007
Link: http://lkml.kernel.org/r/20170823211408.31198-1-ebiggers3@gmail.com
Fixes: 7c05126793 ("mm, fork: make dup_mmap wait for mmap_sem for write killable")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org> [v4.7+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In DAX there are two separate places where the 2MiB range of a PMD is
defined.
The first is in the page tables, where a PMD mapping inserted for a
given address spans from (vmf->address & PMD_MASK) to ((vmf->address &
PMD_MASK) + PMD_SIZE - 1). That is, from the 2MiB boundary below the
address to the 2MiB boundary above the address.
So, for example, a fault at address 3MiB (0x30 0000) falls within the
PMD that ranges from 2MiB (0x20 0000) to 4MiB (0x40 0000).
The second PMD range is in the mapping->page_tree, where a given file
offset is covered by a radix tree entry that spans from one 2MiB aligned
file offset to another 2MiB aligned file offset.
So, for example, the file offset for 3MiB (pgoff 768) falls within the
PMD range for the order 9 radix tree entry that ranges from 2MiB (pgoff
512) to 4MiB (pgoff 1024).
This system works so long as the addresses and file offsets for a given
mapping both have the same offsets relative to the start of each PMD.
Consider the case where the starting address for a given file isn't 2MiB
aligned - say our faulting address is 3 MiB (0x30 0000), but that
corresponds to the beginning of our file (pgoff 0). Now all the PMDs in
the mapping are misaligned so that the 2MiB range defined in the page
tables never matches up with the 2MiB range defined in the radix tree.
The current code notices this case for DAX faults to storage with the
following test in dax_pmd_insert_mapping():
if (pfn_t_to_pfn(pfn) & PG_PMD_COLOUR)
goto unlock_fallback;
This test makes sure that the pfn we get from the driver is 2MiB
aligned, and relies on the assumption that the 2MiB alignment of the pfn
we get back from the driver matches the 2MiB alignment of the faulting
address.
However, faults to holes were not checked and we could hit the problem
described above.
This was reported in response to the NVML nvml/src/test/pmempool_sync
TEST5:
$ cd nvml/src/test/pmempool_sync
$ make TEST5
You can grab NVML here:
https://github.com/pmem/nvml/
The dmesg warning you see when you hit this error is:
WARNING: CPU: 13 PID: 2900 at fs/dax.c:641 dax_insert_mapping_entry+0x2df/0x310
Where we notice in dax_insert_mapping_entry() that the radix tree entry
we are about to replace doesn't match the locked entry that we had
previously inserted into the tree. This happens because the initial
insertion was done in grab_mapping_entry() using a pgoff calculated from
the faulting address (vmf->address), and the replacement in
dax_pmd_load_hole() => dax_insert_mapping_entry() is done using
vmf->pgoff.
In our failure case those two page offsets (one calculated from
vmf->address, one using vmf->pgoff) point to different order 9 radix
tree entries.
This failure case can result in a deadlock because the radix tree unlock
also happens on the pgoff calculated from vmf->address. This means that
the locked radix tree entry that we swapped in to the tree in
dax_insert_mapping_entry() using vmf->pgoff is never unlocked, so all
future faults to that 2MiB range will block forever.
Fix this by validating that the faulting address's PMD offset matches
the PMD offset from the start of the file. This check is done at the
very beginning of the fault and covers faults that would have mapped to
storage as well as faults to holes. I left the COLOUR check in
dax_pmd_insert_mapping() in place in case we ever hit the insanity
condition where the alignment of the pfn we get from the driver doesn't
match the alignment of the userspace address.
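The added check amounts to comparing the PMD colour of the faulting address with that of the file offset, roughly as follows (a sketch using the PG_PMD_COLOUR mask quoted above; the exact placement in the fault handler may differ):

        /*
         * Fall back to PTE faults if the faulting address is not aligned,
         * within its PMD, the same way the file offset is.
         */
        if ((vmf->pgoff & PG_PMD_COLOUR) !=
            ((vmf->address >> PAGE_SHIFT) & PG_PMD_COLOUR))
                goto fallback;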
Link: http://lkml.kernel.org/r/20170822222436.18926-1-ross.zwisler@linux.intel.com
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reported-by: "Slusarz, Marcin" <marcin.slusarz@intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Matthew Wilcox <mawilcox@microsoft.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
/sys/kernel/mm/transparent_hugepage/shmem_enabled controls if we want
to allocate huge pages when allocate pages for private in-kernel shmem
mount.
Unfortunately, as Dan noticed, I've screwed it up and the only way to
make the kernel allocate huge pages for the mount is to use "force" there.
All other values will be effectively ignored.
Link: http://lkml.kernel.org/r/20170822144254.66431-1-kirill.shutemov@linux.intel.com
Fixes: 5a6e75f811 ("shmem: prepare huge= mount option and sysfs knob")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There is a problem that counting the pages for creating the hibernation
snapshot can take a significant amount of time, especially on systems
with large memory. Since the counting job is performed with irqs
disabled, this might lead to an NMI lockup. The following warning was
found on a system with 1.5TB DRAM:
Freezing user space processes ... (elapsed 0.002 seconds) done.
OOM killer disabled.
PM: Preallocating image memory...
NMI watchdog: Watchdog detected hard LOCKUP on cpu 27
CPU: 27 PID: 3128 Comm: systemd-sleep Not tainted 4.13.0-0.rc2.git0.1.fc27.x86_64 #1
task: ffff9f01971ac000 task.stack: ffffb1a3f325c000
RIP: 0010:memory_bm_find_bit+0xf4/0x100
Call Trace:
swsusp_set_page_free+0x2b/0x30
mark_free_pages+0x147/0x1c0
count_data_pages+0x41/0xa0
hibernate_preallocate_memory+0x80/0x450
hibernation_snapshot+0x58/0x410
hibernate+0x17c/0x310
state_store+0xdf/0xf0
kobj_attr_store+0xf/0x20
sysfs_kf_write+0x37/0x40
kernfs_fop_write+0x11c/0x1a0
__vfs_write+0x37/0x170
vfs_write+0xb1/0x1a0
SyS_write+0x55/0xc0
entry_SYSCALL_64_fastpath+0x1a/0xa5
...
done (allocated 6590003 pages)
PM: Allocated 26360012 kbytes in 19.89 seconds (1325.28 MB/s)
It took nearly 20 seconds (2.10GHz CPU), thus the NMI lockup was
triggered. In case the timeout of the NMI watchdog has been set to 1
second, a safe interval should be 6590003/20 = 320k pages in theory.
However, there might also be some platforms running at a lower frequency,
so feed the watchdog every 100k pages.
[yu.c.chen@intel.com: simplification]
Link: http://lkml.kernel.org/r/1503460079-29721-1-git-send-email-yu.c.chen@intel.com
[yu.c.chen@intel.com: use interval of 128k instead of 100k to avoid modulus]
Link: http://lkml.kernel.org/r/1503328098-5120-1-git-send-email-yu.c.chen@intel.com
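In sketch form (the counter name and placement are illustrative):

        /* every 2^17 (~128k) pages: a power-of-two interval keeps this a cheap mask test */
        if (!(++page_count & ((128 * 1024) - 1)))
                touch_nmi_watchdog();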
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Reported-by: Jan Filipcewicz <jan.filipcewicz@intel.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Michal Hocko <mhocko@suse.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Len Brown <lenb@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 0b0f9dc5 ("Revert "virtio_pci: use shared interrupts for
virtqueues"") removed the adjustment of the pre_vectors for the virtio
MSI-X vector allocation which was added in commit fb5e31d9 ("virtio:
allow drivers to request IRQ affinity when creating VQs"). This will
lead to an incorrect assignment of MSI-X vectors, and potential
deadlocks when offlining cpus.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Fixes: 0b0f9dc5 ("Revert "virtio_pci: use shared interrupts for virtqueues"")
Reported-by: YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The message printed on disk resize is incorrect. The following is
printed when resizing to 2 GiB:
$ truncate -s 1G test.img
$ qemu -device virtio-blk-pci,logical_block_size=4096,...
(qemu) block_resize drive1 2G
virtio_blk virtio0: new size: 4194304 4096-byte logical blocks (17.2 GB/16.0 GiB)
The virtio_blk capacity config field is in 512-byte sector units
regardless of logical_block_size as per the VIRTIO specification.
Therefore the message should read:
virtio_blk virtio0: new size: 524288 4096-byte logical blocks (2.15 GB/2.0 GiB)
Note that this only affects the printed message. Thankfully the actual
block device has the correct size because the block layer expects
capacity in sectors.
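The arithmetic behind the corrected message, as a worked example (names are illustrative):

        /* virtio_blk capacity is always in 512-byte sectors */
        u64 capacity = (2ULL << 30) / 512;      /* 2 GiB -> 4194304 sectors */
        unsigned int blk_size = 4096;           /* logical_block_size */
        u64 nblocks = capacity >> (ilog2(blk_size) - 9);        /* 524288 4096-byte blocks */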
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The symbolic constants QUEUE_FLAG_SCSI_PASSTHROUGH, QUEUE_FLAG_QUIESCED
and REQ_NOWAIT are missing from blk-mq-debugfs.c. Add these to
blk-mq-debugfs.c such that these appear as names in debugfs instead of
as numbers.
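blk-mq-debugfs.c names flags through small lookup tables, so the additions amount to new entries along these lines (a sketch of the pattern, not necessarily the literal diff):

        static const char *const blk_queue_flag_name[] = {
                /* ...existing entries... */
                QUEUE_FLAG_NAME(SCSI_PASSTHROUGH),
                QUEUE_FLAG_NAME(QUIESCED),
        };

        static const char *const cmd_flag_name[] = {
                /* ...existing entries... */
                CMD_FLAG_NAME(NOWAIT),
        };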
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nixiaoming pointed out that there is a memory leak in
kvm_vm_ioctl_create_spapr_tce() if the call to anon_inode_getfd()
fails; the memory allocated for the kvmppc_spapr_tce_table struct
is not freed, nor are the pages allocated for the iommu
tables. In addition, we have already incremented the process's
count of locked memory pages, and this doesn't get restored on
error.
David Hildenbrand pointed out that there is a race in that the
function checks early on that there is not already an entry in the
stt->iommu_tables list with the same LIOBN, but an entry with the
same LIOBN could get added between then and when the new entry is
added to the list.
This fixes all three problems. To simplify things, we now call
anon_inode_getfd() before placing the new entry in the list. The
check for an existing entry is done while holding the kvm->lock
mutex, immediately before adding the new entry to the list.
Finally, on failure we now call kvmppc_account_memlimit to
decrement the process's count of locked memory pages.
Reported-by: Nixiaoming <nixiaoming@huawei.com>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Regardless of which events form a group, it does not make sense for the
events to target different tasks and/or CPUs, as this leaves the group
inconsistent and impossible to schedule. The core perf code assumes that
these are consistent across (successfully initialised) groups.
Core perf code only verifies this when moving SW events into a HW
context. Thus, we can violate this requirement for pure SW groups and
pure HW groups, unless the relevant PMU driver happens to perform this
verification itself. These mismatched groups subsequently wreak havoc
elsewhere.
For example, we handle watchpoints as SW events, and reserve watchpoint
HW on a per-CPU basis at pmu::event_init() time to ensure that any event
that is initialised is guaranteed to have a slot at pmu::add() time.
However, the core code only checks the group leader's cpu filter (via
event_filter_match()), and can thus install follower events onto CPUs
violating their (mismatched) CPU filters, potentially installing them
into a CPU without sufficient reserved slots.
This can be triggered with the below test case, resulting in warnings
from arch backends.
#define _GNU_SOURCE
#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
#include <sched.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                           int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

char watched_char;

struct perf_event_attr wp_attr = {
        .type = PERF_TYPE_BREAKPOINT,
        .bp_type = HW_BREAKPOINT_RW,
        .bp_addr = (unsigned long)&watched_char,
        .bp_len = 1,
        .size = sizeof(wp_attr),
};

int main(int argc, char *argv[])
{
        int leader, ret;
        cpu_set_t cpus;

        /*
         * Force use of CPU0 to ensure our CPU0-bound events get scheduled.
         */
        CPU_ZERO(&cpus);
        CPU_SET(0, &cpus);
        ret = sched_setaffinity(0, sizeof(cpus), &cpus);
        if (ret) {
                printf("Unable to set cpu affinity\n");
                return 1;
        }

        /* open leader event, bound to this task, CPU0 only */
        leader = perf_event_open(&wp_attr, 0, 0, -1, 0);
        if (leader < 0) {
                printf("Couldn't open leader: %d\n", leader);
                return 1;
        }

        /*
         * Open a follower event that is bound to the same task, but a
         * different CPU. This means that the group should never be possible to
         * schedule.
         */
        ret = perf_event_open(&wp_attr, 0, 1, leader, 0);
        if (ret < 0) {
                printf("Couldn't open mismatched follower: %d\n", ret);
                return 1;
        } else {
                printf("Opened leader/follower with mismatched CPUs\n");
        }

        /*
         * Open as many independent events as we can, all bound to the same
         * task, CPU0 only.
         */
        do {
                ret = perf_event_open(&wp_attr, 0, 0, -1, 0);
        } while (ret >= 0);

        /*
         * Force enable/disable all events to trigger the erroneous
         * installation of the follower event.
         */
        printf("Opened all events. Toggling..\n");
        for (;;) {
                prctl(PR_TASK_PERF_EVENTS_DISABLE, 0, 0, 0, 0);
                prctl(PR_TASK_PERF_EVENTS_ENABLE, 0, 0, 0, 0);
        }

        return 0;
}
Fix this by validating this requirement regardless of whether we're
moving events.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Zhou Chengming <zhouchengming1@huawei.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1498142498-15758-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The following commit:
39a0526fb3 ("x86/mm: Factor out LDT init from context init")
renamed init_new_context() to init_new_context_ldt() and added a new
init_new_context() which calls init_new_context_ldt(). However, the
error code of init_new_context_ldt() was ignored. Consequently, if a
memory allocation in alloc_ldt_struct() failed during a fork(), the
->context.ldt of the new task remained the same as that of the old task
(due to the memcpy() in dup_mm()). ldt_struct's are not intended to be
shared, so a use-after-free occurred after one task exited.
Fix the bug by making init_new_context() pass through the error code of
init_new_context_ldt().
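The fix boils down to propagating the return value (a sketch; the other per-mm context setup in the real header is elided):

        static inline int init_new_context(struct task_struct *tsk,
                                           struct mm_struct *mm)
        {
                /* ...pkey and other per-mm context setup elided... */
                return init_new_context_ldt(tsk, mm);   /* previously: call it, then return 0 regardless */
        }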
This bug was found by syzkaller, which encountered the following splat:
BUG: KASAN: use-after-free in free_ldt_struct.part.2+0x10a/0x150 arch/x86/kernel/ldt.c:116
Read of size 4 at addr ffff88006d2cb7c8 by task kworker/u9:0/3710
CPU: 1 PID: 3710 Comm: kworker/u9:0 Not tainted 4.13.0-rc4-next-20170811 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
print_address_description+0x73/0x250 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351 [inline]
kasan_report+0x24e/0x340 mm/kasan/report.c:409
__asan_report_load4_noabort+0x14/0x20 mm/kasan/report.c:429
free_ldt_struct.part.2+0x10a/0x150 arch/x86/kernel/ldt.c:116
free_ldt_struct arch/x86/kernel/ldt.c:173 [inline]
destroy_context_ldt+0x60/0x80 arch/x86/kernel/ldt.c:171
destroy_context arch/x86/include/asm/mmu_context.h:157 [inline]
__mmdrop+0xe9/0x530 kernel/fork.c:889
mmdrop include/linux/sched/mm.h:42 [inline]
exec_mmap fs/exec.c:1061 [inline]
flush_old_exec+0x173c/0x1ff0 fs/exec.c:1291
load_elf_binary+0x81f/0x4ba0 fs/binfmt_elf.c:855
search_binary_handler+0x142/0x6b0 fs/exec.c:1652
exec_binprm fs/exec.c:1694 [inline]
do_execveat_common.isra.33+0x1746/0x22e0 fs/exec.c:1816
do_execve+0x31/0x40 fs/exec.c:1860
call_usermodehelper_exec_async+0x457/0x8f0 kernel/umh.c:100
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
Allocated by task 3700:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459 [inline]
kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:551
kmem_cache_alloc_trace+0x136/0x750 mm/slab.c:3627
kmalloc include/linux/slab.h:493 [inline]
alloc_ldt_struct+0x52/0x140 arch/x86/kernel/ldt.c:67
write_ldt+0x7b7/0xab0 arch/x86/kernel/ldt.c:277
sys_modify_ldt+0x1ef/0x240 arch/x86/kernel/ldt.c:307
entry_SYSCALL_64_fastpath+0x1f/0xbe
Freed by task 3700:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459 [inline]
kasan_slab_free+0x71/0xc0 mm/kasan/kasan.c:524
__cache_free mm/slab.c:3503 [inline]
kfree+0xca/0x250 mm/slab.c:3820
free_ldt_struct.part.2+0xdd/0x150 arch/x86/kernel/ldt.c:121
free_ldt_struct arch/x86/kernel/ldt.c:173 [inline]
destroy_context_ldt+0x60/0x80 arch/x86/kernel/ldt.c:171
destroy_context arch/x86/include/asm/mmu_context.h:157 [inline]
__mmdrop+0xe9/0x530 kernel/fork.c:889
mmdrop include/linux/sched/mm.h:42 [inline]
__mmput kernel/fork.c:916 [inline]
mmput+0x541/0x6e0 kernel/fork.c:927
copy_process.part.36+0x22e1/0x4af0 kernel/fork.c:1931
copy_process kernel/fork.c:1546 [inline]
_do_fork+0x1ef/0xfb0 kernel/fork.c:2025
SYSC_clone kernel/fork.c:2135 [inline]
SyS_clone+0x37/0x50 kernel/fork.c:2129
do_syscall_64+0x26c/0x8c0 arch/x86/entry/common.c:287
return_from_SYSCALL_64+0x0/0x7a
Here is a C reproducer:
#include <asm/ldt.h>
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

static void *fork_thread(void *_arg)
{
        fork();
}

int main(void)
{
        struct user_desc desc = { .entry_number = 8191 };

        syscall(__NR_modify_ldt, 1, &desc, sizeof(desc));

        for (;;) {
                if (fork() == 0) {
                        pthread_t t;

                        srand(getpid());
                        pthread_create(&t, NULL, fork_thread, NULL);
                        usleep(rand() % 10000);
                        syscall(__NR_exit_group, 0);
                }
                wait(NULL);
        }
}
Note: the reproducer takes advantage of the fact that alloc_ldt_struct()
may use vmalloc() to allocate a large ->entries array, and after
commit:
5d17a73a2e ("vmalloc: back off when the current task is killed")
it is possible for userspace to fail a task's vmalloc() by
sending a fatal signal, e.g. via exit_group(). It would be more
difficult to reproduce this bug on kernels without that commit.
This bug only affected kernels with CONFIG_MODIFY_LDT_SYSCALL=y.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@vger.kernel.org> [v4.6+]
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Fixes: 39a0526fb3 ("x86/mm: Factor out LDT init from context init")
Link: http://lkml.kernel.org/r/20170824175029.76040-1-ebiggers3@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The host pkru is restored right after vcpu exit (commit 1be0e61), so
KVM_GET_XSAVE will return the host PKRU value instead. Fix this by
using the guest PKRU explicitly in fill_xsave and load_xsave. This
part is based on a patch by Junkang Fu.
The host PKRU data may also not match the value in vcpu->arch.guest_fpu.state,
because it could have been changed by userspace since the last time
it was saved, so skip loading it in kvm_load_guest_fpu.
Reported-by: Junkang Fu <junkang.fjk@alibaba-inc.com>
Cc: Yang Zhang <zy107165@alibaba-inc.com>
Fixes: 1be0e61c1f
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>