Having the test_pages_isolated failure message be a warning confuses users
into thinking that it is more serious than it really is. In reality, if
called via CMA, the allocation will be retried, so a single
test_pages_isolated failure does not prevent the allocation from succeeding.
Demote the warning message to an info message and reformat it so that
the text "failed" does not appear and a less worrying "PFNs busy" is
used instead.
This message is trivially reproducible on a 10GB x86 machine on 3.16.y
kernels configured with CONFIG_DMA_CMA.
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Unlike with SLUB, an object in SLAB does not always start at the beginning
of the slab. This causes an alignment problem now that slab merging is
supported by commit 12220dea07 ("mm/slab: support slab merge").
Following is the report from Markos of a boot failure on Malta with EVA.
Calibrating delay loop... 19.86 BogoMIPS (lpj=99328)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 4096 (order: 0, 16384 bytes)
Mountpoint-cache hash table entries: 4096 (order: 0, 16384 bytes)
Kernel bug detected[#1]:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.17.0-05639-g12220dea07f1 #1631
task: 1f04f5d8 ti: 1f050000 task.ti: 1f050000
epc : 80141190 alloc_unbound_pwq+0x234/0x304
Not tainted
ra : 80141184 alloc_unbound_pwq+0x228/0x304
Process swapper/0 (pid: 1, threadinfo=1f050000, task=1f04f5d8, tls=00000000)
Call Trace:
alloc_unbound_pwq+0x234/0x304
apply_workqueue_attrs+0x11c/0x294
__alloc_workqueue_key+0x23c/0x470
init_workqueues+0x320/0x400
do_one_initcall+0xe8/0x23c
kernel_init_freeable+0x9c/0x224
kernel_init+0x10/0x100
ret_from_kernel_thread+0x14/0x1c
[ end trace cb88537fdc8fa200 ]
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
alloc_unbound_pwq() allocates a slab object from the pool_workqueue cache.
This kmem_cache requires 256-byte alignment, but the current merging code
doesn't honor that and merges it with kmalloc-256. kmalloc-256 requires
only cacheline-size alignment, so the above failure occurs. On x86,
however, kmalloc-256 happens to be aligned to 256 bytes, so the problem
did not show up there.
To fix this problem, this patch introduces an alignment mismatch check in
find_mergeable().
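A hedged sketch of the kind of check described, as it could appear in
find_mergeable() (not necessarily the exact upstream diff):
		/*
		 * SLAB does not necessarily place objects at the start of
		 * the slab, so only merge into a cache whose own alignment
		 * satisfies the one requested here.
		 */
		if (IS_ENABLED(CONFIG_SLAB) && align &&
		    (align > s->align || s->align % align))
			continue;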
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Reported-by: Markos Chandras <Markos.Chandras@imgtec.com>
Tested-by: Markos Chandras <Markos.Chandras@imgtec.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The current pageblock isolation logic can isolate each pageblock
individually. This causes a freepage accounting problem if a freepage with
pageblock order on an isolated pageblock is merged with another freepage on
a normal pageblock. We can prevent such merging by restricting the maximum
merge order to pageblock order when the freepage is on an isolated
pageblock.
A side-effect of this change is that there may be non-merged buddy
freepages even after pageblock isolation finishes, because undoing
pageblock isolation merely moves freepages from the isolate buddy list to
the normal buddy list without considering merging. So the patch also
makes undoing pageblock isolation consider freepage merging. When
un-isolating, a freepage with more than pageblock order and its buddy are
checked. If they are on a normal pageblock, instead of just moving them, we
isolate the freepage and free it so that it gets merged.
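A hedged sketch of the idea in __free_one_page() (variable names and exact
form are assumptions); the buddy merging loop then runs up to max_order
instead of MAX_ORDER:
	unsigned int max_order = MAX_ORDER;

	/*
	 * A freepage on an isolated pageblock must not merge beyond
	 * pageblock_order, otherwise it would be joined with buddies from
	 * normal pageblocks and the MIGRATE_ISOLATE accounting would break.
	 */
	if (unlikely(is_migrate_isolate(migratetype)))
		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);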
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Heesub Shin <heesub.shin@samsung.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Ritesh Harjani <ritesh.list@gmail.com>
Cc: Gioh Kim <gioh.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
All the callers of __free_one_page() have similar freepage counting logic,
so we can move it into __free_one_page(). This reduces lines of code and
helps future maintenance.
This is also a preparation step for "mm/page_alloc: restrict max order of
merging on isolated pageblock", which fixes the freepage counting problem
for freepages with more than pageblock order.
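For illustration, a hedged sketch of the consolidation: the per-caller
counting collapses into one place inside __free_one_page() (exact placement
and the is_migrate_isolate() guard are assumptions based on the description
above):
	/* Count the freed page here, once for every freepath, instead of in
	 * each caller; isolated pageblocks stay out of NR_FREE_PAGES. */
	if (!is_migrate_isolate(migratetype))
		__mod_zone_freepage_state(zone, 1 << order, migratetype);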
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Heesub Shin <heesub.shin@samsung.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Ritesh Harjani <ritesh.list@gmail.com>
Cc: Gioh Kim <gioh.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In free_pcppages_bulk(), we use the cached migratetype of a freepage to
determine the buddy list to which the freepage will be added. This
information is stored when the freepage is added to the pcp list, so if
isolation of this freepage's pageblock begins after it is stored, the
cached information can be stale. In other words, it holds the original
migratetype rather than MIGRATE_ISOLATE.
There are two problems caused by this stale information.
One is that we can't keep these freepages from being allocated.
Although the pageblock is isolated, the freepage will be added to a normal
buddy list, so it can be allocated without any restriction. The other
problem is incorrect freepage accounting: freepages on an isolated
pageblock should not be counted in the number of freepages.
The following is the code snippet from free_pcppages_bulk():
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
__free_one_page(page, page_to_pfn(page), zone, 0, mt);
trace_mm_page_pcpu_drain(page, 0, mt);
if (likely(!is_migrate_isolate_page(page))) {
	__mod_zone_page_state(zone, NR_FREE_PAGES, 1);
	if (is_migrate_cma(mt))
		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1);
}
As you can see in the above snippet, the current code already handles the
second problem, incorrect freepage accounting, by re-fetching the pageblock
migratetype through is_migrate_isolate_page(page).
But because this re-fetched information isn't used for __free_one_page(),
the first problem is not solved. This patch moves the re-fetch of the
pageblock migratetype before __free_one_page() and uses it for
__free_one_page().
In addition to moving this re-fetch up, the patch applies an optimization:
the migratetype is re-fetched only if there is an isolated pageblock.
Pageblock isolation is a rare event, so we avoid the re-fetch in the
common case.
This patch also corrects the migratetype in the tracepoint output.
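A hedged sketch of how the loop body in free_pcppages_bulk() could look
after the change (has_isolate_pageblock() stands for the
nr_isolate_pageblock test introduced earlier in the series; not necessarily
the exact upstream code):
mt = get_freepage_migratetype(page);
/* Isolation is rare; only then can the cached value be stale,
 * so re-fetch just in that case. */
if (unlikely(has_isolate_pageblock(zone)))
	mt = get_pageblock_migratetype(page);

/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
__free_one_page(page, page_to_pfn(page), zone, 0, mt);
trace_mm_page_pcpu_drain(page, 0, mt);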
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Heesub Shin <heesub.shin@samsung.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Ritesh Harjani <ritesh.list@gmail.com>
Cc: Gioh Kim <gioh.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Before describing the bugs themselves, I first explain the definition of a
freepage.
1. Pages on a buddy list are counted as freepages.
2. Pages on the isolate migratetype buddy list are *not* counted as freepages.
3. Pages on a CMA buddy list are also counted as CMA freepages.
Now, I describe the problems and the related patches.
Patch 1: There are race conditions in getting the pageblock migratetype
that result in misplacement of freepages on the buddy lists, an incorrect
freepage count, and unavailability of freepages.
Patch 2: Freepages on the pcp list can carry stale cached information used
to determine the buddy list they should go to. This causes misplacement of
freepages on the buddy lists and an incorrect freepage count.
Patch 4: Merging freepages from pageblocks with different migratetypes
causes a freepage accounting problem. This patch fixes it.
Without patchset [3], the above problems don't happen in my CMA allocation
test, because CMA reserved pages aren't used at all, so there is no
chance for the above races.
With patchset [3], I ran a simple CMA allocation test and got the result
below:
- Virtual machine, 4 cpus, 1024 MB memory, 256 MB CMA reservation
- run kernel build (make -j16) in the background
- 30 CMA allocation attempts (8MB * 30 = 240MB) at 5 second intervals
- Result: more than 5000 freepages are missing from the count
With patchset [3] and this patchset, no freepages are missing from the
count, so I conclude that the problems are solved.
These problems also occur in my simple memory offlining test environment.
This patch (of 4):
There are two paths to reach the core free function of the buddy
allocator, __free_one_page(): one is free_one_page()->__free_one_page() and
the other is free_hot_cold_page()->free_pcppages_bulk()->__free_one_page().
Each path has a race condition causing serious problems. This patch
focuses on the first type of freepath; a following patch will solve the
problem in the second type.
In the first type of freepath, we get the migratetype of the freeing page
without holding the zone lock, so it can be racy. There are two cases
of this race.
1. Pages are added to the isolate buddy list after restoring the original
migratetype.
CPU1                                          CPU2
get migratetype => return MIGRATE_ISOLATE
call free_one_page() with MIGRATE_ISOLATE

                                              grab the zone lock
                                              unisolate pageblock
                                              release the zone lock

grab the zone lock
call __free_one_page() with MIGRATE_ISOLATE
freepage go into isolate buddy list,
although pageblock is already unisolated
This may cause two problems. One is that we can't use this page anymore
until the next isolation attempt on this pageblock, because the freepage is
on the isolate buddy list. The other is that the freepage accounting could
be wrong due to merging between different buddy lists. Freepages on the
isolate buddy list aren't counted as freepages, but ones on the normal
buddy lists are. If a merge happens, a buddy freepage on a normal buddy
list is inevitably moved to the isolate buddy list without any
consideration of freepage accounting, so the count can become incorrect.
2. Pages are added to the normal buddy list while the pageblock is isolated.
This is similar to the above case.
It also may cause two problems. One is that we can't keep these
freepages from being allocated: although the pageblock is isolated, the
freepage is added to a normal buddy list, so it can be allocated without
any restriction. The other problem is the same as in case 1, that is,
incorrect freepage accounting.
This race condition can be prevented by checking the migratetype again
while holding the zone lock. Because that is a somewhat heavy operation
and isn't needed in the common case, we want to avoid the recheck as much
as possible. So this patch introduces a new field, nr_isolate_pageblock,
in struct zone to check whether there is an isolated pageblock. With this,
we can avoid re-checking the migratetype in the common case and do it only
if there is an isolated pageblock or the migratetype is MIGRATE_ISOLATE.
This solves the above-mentioned problems.
Changes from v3:
Add one more check in free_one_page() that checks whether the migratetype
is MIGRATE_ISOLATE or not. Without this, the above-mentioned case 1 could
happen.
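A hedged sketch of the recheck in free_one_page() (has_isolate_pageblock()
names the nr_isolate_pageblock test; the surrounding code is simplified):
static void free_one_page(struct zone *zone, struct page *page,
			  unsigned long pfn, unsigned int order,
			  int migratetype)
{
	spin_lock(&zone->lock);

	/* Under the zone lock the pageblock migratetype is stable; recheck
	 * it only when isolation may actually be involved. */
	if (unlikely(has_isolate_pageblock(zone) ||
		     is_migrate_isolate(migratetype)))
		migratetype = get_pfnblock_migratetype(page, pfn);

	__free_one_page(page, pfn, zone, order, migratetype);
	spin_unlock(&zone->lock);
}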
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Heesub Shin <heesub.shin@samsung.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Ritesh Harjani <ritesh.list@gmail.com>
Cc: Gioh Kim <gioh.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit 7d49d88683 ("mm, compaction: reduce zone checking frequency in
the migration scanner") has a side-effect that changes the iteration
range calculation. Before the change, block_end_pfn is calculated using
start_pfn, but now it blindly adds pageblock_nr_pages to the previous
value.
This causes the problem that isolation_start_pfn is larger than
block_end_pfn when we isolate a page with more than pageblock order.
In this case, isolation fails due to an invalid range parameter.
To prevent this, this patch skips the range until a proper target
pageblock is reached. Without this patch, CMA with more than pageblock
order always fails; with this patch it succeeds.
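A hedged sketch of the skipping described, inside the isolation loop
(variable names are assumptions):
		/*
		 * A freepage of more than pageblock order may have been
		 * isolated, so pfn can already be past block_end_pfn;
		 * move the window forward to the pageblock containing pfn.
		 */
		if (pfn >= block_end_pfn) {
			block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
			block_end_pfn = min(block_end_pfn, end_pfn);
		}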
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
zram could kunmap_atomic() a NULL pointer in a rare situation: a zram
page becomes a fully zeroed page after a partial write I/O. The current
code doesn't handle this case and performs kunmap_atomic() on a NULL
pointer, which panics the kernel.
This patch fixes this issue.
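A hedged sketch of the guard in the write path (variable names follow the
driver but are assumptions here):
	if (page_zero_filled(uncmem)) {
		/* For a partial write, user_mem was already unmapped (and
		 * set to NULL) when the bounce buffer was filled, so don't
		 * kunmap_atomic() a NULL pointer again. */
		if (user_mem)
			kunmap_atomic(user_mem);
		/* ...record the entry as zero-filled and return success... */
	}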
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Weijie Yang <weijie.yang.kh@gmail.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The kuser helpers page is not set up on non-MMU systems, so it does
not make sense to allow CONFIG_KUSER_HELPERS to be enabled when
CONFIG_MMU=n. Allowing it to be set on !MMU results in an oops in
set_tls (used in execve and the arm_syscall trap handler):
Unhandled exception: IPSR = 00000005 LR = fffffff1
CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
task: 8b838000 ti: 8b82a000 task.ti: 8b82a000
PC is at flush_thread+0x32/0x40
LR is at flush_thread+0x21/0x40
pc : [<8f00157a>] lr : [<8f001569>] psr: 4100000b
sp : 8b82be20 ip : 00000000 fp : 8b83c000
r10: 00000001 r9 : 88018c84 r8 : 8bb85000
r7 : 8b838000 r6 : 00000000 r5 : 8bb77400 r4 : 8b82a000
r3 : ffff0ff0 r2 : 8b82a000 r1 : 00000000 r0 : 88020354
xPSR: 4100000b
CPU: 0 PID: 1 Comm: swapper Not tainted 3.18.0-rc1-00041-ga30465a #216
[<8f002bc1>] (unwind_backtrace) from [<8f002033>] (show_stack+0xb/0xc)
[<8f002033>] (show_stack) from [<8f00265b>] (__invalid_entry+0x4b/0x4c)
As best I can tell this issue existed for the set_tls ARM syscall
before commit fbfb872f5f "ARM: 8148/1: flush TLS and thumbee
register state during exec" consolidated the TLS manipulation code
into the set_tls helper function, but now that we're using it to flush
register state during execve, !MMU users encounter the oops at the
first exec.
Prevent CONFIG_MMU=n configurations from enabling
CONFIG_KUSER_HELPERS.
Fixes: fbfb872f5f (ARM: 8148/1: flush TLS and thumbee register state during exec)
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
To speed up decompression, the decompressor sets up a flat, cacheable
mapping of memory. However, when there is insufficient space to hold
the page tables for this mapping, we don't bother to enable the caches
and subsequently skip all the cache maintenance hooks.
Skipping the cache maintenance before jumping to the relocated code
allows the processor to predict the branch and populate the I-cache
with stale data before the relocation loop has completed (since a
bootloader may have SCTLR.I set, which permits normal, cacheable
instruction fetches regardless of SCTLR.M).
This patch moves the cache maintenance check into the maintenance
routines themselves, allowing the v6/v7 versions to invalidate the
I-cache regardless of the MMU state.
Cc: <stable@vger.kernel.org>
Reported-by: Marc Carino <marc.ceeeee@gmail.com>
Tested-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Merge tag 'drm/tegra/for-3.18-rc5' of git://people.freedesktop.org/~tagr/linux into drm-fixes
drm/tegra: Fixes for v3.18-rc5
This is a single patch that fixes the VBLANK machinery after:
7ffd7a6851 drm: Always reject drm_vblank_get() after drm_vblank_off()
* tag 'drm/tegra/for-3.18-rc5' of git://people.freedesktop.org/~tagr/linux:
drm/tegra: dc: Add missing call to drm_vblank_on()
One modesetting, one gk20a fix.
* 'linux-3.18' of git://anongit.freedesktop.org/git/nouveau/linux-2.6:
drm/nouveau/nv50/disp: Fix modeset on G94
drm/gk20a/fb: fix setting of large page size bit
one regression fix.
* tag 'drm-intel-fixes-2014-11-13' of git://anongit.freedesktop.org/drm-intel:
drm/i915: Fix obj->map_and_fenceable across tiling changes
Currently, we only match against the local port number in order to reuse a
socket. But if a new vxlan wants an IPv6 socket and an IPv4 one is already
bound to that port, vxlan will reuse the IPv4 socket as IPv6 and a panic
will follow. The following steps reproduce it:
# ip link add vxlan6 type vxlan id 42 group 229.10.10.10 \
srcport 5000 6000 dev eth0
# ip link add vxlan7 type vxlan id 43 group ff0e::110 \
srcport 5000 6000 dev eth0
# ip link set vxlan6 up
# ip link set vxlan7 up
<panic>
[ 4.187481] BUG: unable to handle kernel NULL pointer dereference at 0000000000000058
...
[ 4.188076] Call Trace:
[ 4.188085] [<ffffffff81667c4a>] ? ipv6_sock_mc_join+0x3a/0x630
[ 4.188098] [<ffffffffa05a6ad6>] vxlan_igmp_join+0x66/0xd0 [vxlan]
[ 4.188113] [<ffffffff810a3430>] process_one_work+0x220/0x710
[ 4.188125] [<ffffffff810a33c4>] ? process_one_work+0x1b4/0x710
[ 4.188138] [<ffffffff810a3a3b>] worker_thread+0x11b/0x3a0
[ 4.188149] [<ffffffff810a3920>] ? process_one_work+0x710/0x710
So the address family must also match in order to reuse a socket.
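A hedged sketch of the lookup with the extra family check (vs_head() and
the field accesses follow the driver and are assumptions here):
static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
					  __be16 port)
{
	struct vxlan_sock *vs;

	hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
		/* Match on local port *and* address family, so an IPv4
		 * socket is never handed out for an IPv6 vxlan device. */
		if (inet_sk(vs->sock->sk)->inet_sport == port &&
		    inet_sk(vs->sock->sk)->sk.sk_family == family)
			return vs;
	}
	return NULL;
}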
Reported-by: Jean-Tsung Hsiao <jhsiao@redhat.com>
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With commit be9dad1f9f ("net: phy: suspend phydev when going
to HALTED"), the PHY device is put into a low-power mode using
BMCR_PDOWN when the interface is set down. The smsc911x driver performs
a software reset when opening the device (ndo_open). In that case, the
PHY must be powered up before any register is accessed and before the
software_reset function is called. Otherwise, with the PHY powered down,
the software reset fails and the interface cannot be enabled again.
This patch fixes this scenario, which is easy to reproduce by setting the
network interface down and then up again:
$ ifconfig eth0 down
$ ifconfig eth0 up
ifconfig: SIOCSIFFLAGS: Input/output error
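A hedged sketch of the idea (pdata->phy_dev and the placement in the open
path are assumptions; the driver may need its raw MII accessors instead):
	/* Bring the PHY out of power-down before issuing the MAC soft reset
	 * in the ndo_open path, otherwise the reset cannot complete. */
	int bmcr = phy_read(pdata->phy_dev, MII_BMCR);

	if (bmcr >= 0 && (bmcr & BMCR_PDOWN))
		phy_write(pdata->phy_dev, MII_BMCR, bmcr & ~BMCR_PDOWN);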
Signed-off-by: Enric Balletbo i Serra <eballetbo@iseebcn.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
My editor spewed garbage that looked like memory corruption on
my screen. It turns out that a number of occurrences of "fi" got
turned into a ligature.
This patch replaces these ligatures with the ASCII letters "fi".
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Increased the delay in smsc911x_phy_disable_energy_detect (from 1ms to 2ms).
Dropped the delays in smsc911x_phy_enable_energy_detect (100ms and 1ms).
The patch affects SMSC LAN generation 4 chips with integrated PHY (LAN9221).
I saw problems with soft reset due to wrong udelay timings.
After I fixed the udelay, I measured the time needed to bring the integrated
PHY from power-down to operational mode (the time between clearing the
EDPWRDOWN bit and the soft reset complete event). I got 1ms (measured using
ktime_get). The value is equal to the current value (1ms) used in
smsc911x_phy_disable_energy_detect. It is near the upper bound, and in order
to avoid rare soft reset faults it is doubled (2ms).
I don't know the official timing for bringing up the integrated PHY, as the
specs don't clarify this (or maybe I didn't find it).
It looks safe to drop the delays before and after setting the EDPWRDOWN bit
(enabling PHY power-down mode). I didn't see any regressions with the patch.
The patch was reviewed by Steve Glendinning and Microchip Team.
Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Acked-by: Steve Glendinning <steve.glendinning@shawell.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The patch affects SMSC LAN generation 4 chips with integrated PHY (LAN9221).
It is possible for the PHY to enter power-down mode (ENERGYON clear)
between the ENERGYON bit check in smsc911x_phy_disable_energy_detect and the
SRST bit set in smsc911x_soft_reset. This could happen, for example, if
someone disconnects the ethernet cable between the checks. A PHY in
power-down mode would prevent the MAC portion of the chip from being
software reset.
Initially found by code review, confirmed later using a test case.
This is a low-probability issue, and in order to reproduce it you have to
run the script:
while true; do
	ifconfig eth0 down
	ifconfig eth0 up || break
done
While the script is running you have to plug/unplug the ethernet cable many
times (using a gpio-controlled ethernet switch, for example) until you get:
[ 4516.477783] ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 4516.512207] smsc911x smsc911x.0: eth0: SMSC911x/921x identified at 0xce006000, IRQ: 336
[ 4516.524658] ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 4516.559082] smsc911x smsc911x.0: eth0: SMSC911x/921x identified at 0xce006000, IRQ: 336
[ 4516.571990] ADDRCONF(NETDEV_UP): eth0: link is not ready
ifconfig: SIOCSIFFLAGS: Input/output error
The patch was reviewed by Steve Glendinning and Microchip Team.
Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Acked-by: Steve Glendinning <steve.glendinning@shawell.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
kick_requests() can put linger requests on the notarget list. This
means we need to clear the much-overloaded req->r_req_lru_item in
__unregister_linger_request() as well, or we get an assertion failure
in ceph_osdc_release_request() - !list_empty(&req->r_req_lru_item).
AFAICT the assumption was that registered linger requests cannot be on
any of req->r_req_lru_item lists, but that's clearly not the case.
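A hedged sketch of the kind of change described in
__unregister_linger_request() (not necessarily the exact upstream diff):
	/* kick_requests() may have parked this linger request on the
	 * notarget list via r_req_lru_item, so unhook it from that list
	 * as well before the request is released. */
	list_del_init(&req->r_req_lru_item);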
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Requests have to be unlinked from both osd->o_requests (normal
requests) and osd->o_linger_requests (linger requests) lists when
clearing req->r_osd. Otherwise __unregister_linger_request() gets
confused and we trip over a !list_empty(&osd->o_linger_requests)
assert in __remove_osd().
MON=1 OSD=1:
# cat remove-osd.sh
#!/bin/bash
rbd create --size 1 test
DEV=$(rbd map test)
ceph osd out 0
sleep 3
rbd map dne/dne # obtain a new osdmap as a side effect
rbd unmap $DEV & # will block
sleep 3
ceph osd in 0
Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
The TID of a cap flush ack is 64 bits, but ceph_inode_info::flushing_cap_tid
is only 16 bits. 16 bits should be plenty to let the cap flush updates
pipeline appropriately, but we need to cast in the proper direction when
comparing these differently-sized values. So downcast the 64-bit one
to 16 bits.
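A hedged illustration of the cast direction (the field name follows the
description above; the surrounding code is a simplification):
	/* flush_tid is the 64-bit TID from the ack; the cached value is
	 * 16 bits wide, so compare in the narrower width. */
	if ((u16)flush_tid == ci->flushing_cap_tid) {
		/* treat this cap flush as acked */
	}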
Reflects ceph.git commit a5184cf46a6e867287e24aeb731634828467cd98.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
The branches of the if (i->type & ITER_BVEC) statement in
iov_iter_single_seg_count() are the wrong way around; if ITER_BVEC is
clear then we use i->bvec, when we should be using i->iov. This fixes
it.
In my case, the symptom that this caused was that a KVM guest doing
filesystem operations on a virtual disk would result in one of qemu's
threads on the host going into an infinite loop in
generic_perform_write(). The loop would hit the copied == 0 case and
call iov_iter_single_seg_count() to reduce the number of bytes to try
to process, but because of the error, iov_iter_single_seg_count()
would just return i->count and the loop made no progress and continued
forever.
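For reference, a sketch of the corrected function, close to (but not
guaranteed identical to) the upstream fix:
size_t iov_iter_single_seg_count(const struct iov_iter *i)
{
	if (i->nr_segs == 1)
		return i->count;
	if (i->type & ITER_BVEC)
		/* bvec-backed iterator: limit to the current bio_vec */
		return min(i->count, i->bvec->bv_len - i->iov_offset);
	else
		/* iovec-backed iterator: limit to the current iovec */
		return min(i->count, i->iov->iov_len - i->iov_offset);
}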
Cc: stable@vger.kernel.org # 3.16+
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Merge tag 'sound-3.18-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"Things get calming down, now we have only a few fix patches: a trivial
fix for memory leak in usb-audio, a patch for the new HD-audio PCI id,
a device-specific mute-LED fix, and a slightly big patch to cover the
missing COEF inits of various Realtek codecs"
* tag 'sound-3.18-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ALSA: hda - Add mute LED control for Lenovo Ideapad Z560
ALSA: hda/realtek - Change EAPD to verb control
ALSA: usb-audio: Fix memory leak in FTU quirk
ALSA: hda_intel: Add DeviceIDs for Sunrise Point-LP
Pull SELinux fixlet from James Morris:
"WARN_ONCE() here will unnecessarily terrify users"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
selinux: convert WARN_ONCE() to printk() in selinux_nlmsg_perm()
Pull audit fixes from Paul Moore:
"After he sent the initial audit pull request for 3.18, Eric asked me
to take over the management of the audit tree, hence this pull request
to fix a couple of problems with audit.
As you can see below, the changes are minimal: adding some whitespace
to a string so userspace parses it correctly, and fixing a problem
with audit's usage of fsnotify that was causing audit watch rules to
be lost. Neither of these patches were very controversial on the
mailing lists and they fix real problems, getting them into 3.18 would
be a good thing"
* 'stable-3.18' of git://git.infradead.org/users/pcmoore/audit:
audit: keep inode pinned
audit: AUDIT_FEATURE_CHANGE message format missing delimiting space
Merge tag 'dm-3.18-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
- stable fix for dm-thin that avoids normal IO racing with discard
- stable fix for a dm-cache related bug in dm-btree walking code that
results from using very large fast device (eg 4T) with a very small
cache blocksize (eg 32K) -- this is a very uncommon configuration
- a couple fixes for dm-raid (one for stable and the other addresses a
crash in 3.18-rc1 code)
- stable fix for dm-thinp that addresses a very rare dm-bufio bug
having to do with memory reclaimation (via shrinker) when using
dm-thinp ontop of loopback devices
- fix a leak in dm-stripe target constructor's error path
* tag 'dm-3.18-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm btree: fix a recursion depth bug in btree walking code
dm thin: grab a virtual cell before looking up the mapping
dm raid: fix inaccessible superblocks causing oops in configure_discard_support
dm raid: ensure superblock's size matches device's logical block size
dm bufio: change __GFP_IO to __GFP_FS in shrinker callbacks
dm stripe: fix potential for leak in stripe_ctr error path
pfns are unsigned long, but PHYS_PFN_OFFSET is phys_addr_t. This leads
to page_to_pfn() returning phys_addr_t, which causes type mismatches in
some print statements.
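A hedged sketch of the kind of fix implied (the macro lives in the arm64
headers; the exact form is an assumption):
/* Keep the pfn offset in the same type as a pfn, so page_to_pfn()
 * stays unsigned long instead of widening to phys_addr_t. */
#define PHYS_PFN_OFFSET	((unsigned long)(PHYS_OFFSET >> PAGE_SHIFT))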
Signed-off-by: Neil Zhang <zhangwm@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When experimenting with patches to provide kprobes support for aarch64,
SMP machines would hang when inserting breakpoints into kernel code.
The hangs were caused by a race condition in the code called by
aarch64_insn_patch_text_sync(). The first processor in the
aarch64_insn_patch_text_cb() function would patch the code while other
processors were still entering the function and incrementing the
cpu_count field. This resulted in some processors never observing the
exit condition and therefore never exiting the function. Thus, processors
in the system hung.
The first processor to enter the patching function now performs the
patching and signals that the patching is complete with an additional
increment of the cpu_count field. When all the processors have incremented
the cpu_count field, the count will be num_online_cpus()+1 and they
will return to normal execution.
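A hedged sketch of how the fixed callback could look (field and helper
names follow arch/arm64/kernel/insn.c and are assumptions here):
static int aarch64_insn_patch_text_cb(void *arg)
{
	int i, ret = 0;
	struct aarch64_insn_patch *pp = arg;

	/* The first CPU to arrive becomes the patcher. */
	if (atomic_inc_return(&pp->cpu_count) == 1) {
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
							     pp->new_insns[i]);
		/* Signal completion with an additional increment, so waiters
		 * only leave once the count reaches num_online_cpus() + 1. */
		atomic_inc(&pp->cpu_count);
	} else {
		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
			cpu_relax();
		smp_mb();
	}

	return ret;
}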
Fixes: ae16480785 ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: William Cohen <wcohen@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARM64 currently doesn't fix up faults on the single-byte (strb) case of
__clear_user... which means that we can cause a nasty kernel panic as an
ordinary user with any read of a multiple of PAGE_SIZE plus 1 from /dev/zero.
i.e.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.)
This is a pretty obscure bug in the general case since we'll only
__do_kernel_fault (since there's no extable entry for pc) if the
mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll
always fault.
if (!down_read_trylock(&mm->mmap_sem)) {
	if (!user_mode(regs) && !search_exception_tables(regs->pc))
		goto no_context;
retry:
	down_read(&mm->mmap_sem);
} else {
	/*
	 * The above down_read_trylock() might have succeeded in which
	 * case, we'll have missed the might_sleep() from down_read().
	 */
	might_sleep();
	if (!user_mode(regs) && !search_exception_tables(regs->pc))
		goto no_context;
}
Fix that by adding an extable entry for the strb instruction, since it
touches user memory, similar to the other stores in __clear_user.
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Reported-by: Miloš Prchlík <mprchlik@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Use phys_addr_t for physical address in alloc_init_pud. Although
phys_addr_t and unsigned long are 64 bit in arm64, it is better
to use phys_addr_t to describe physical addresses.
Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
While efi-entry.S mentions that efi_entry() will have relocated the
kernel image, it actually means that efi_entry will have placed a copy
of the kernel in the appropriate location, and until this is branched to
at the end of efi_entry.S, all instructions are executed from the
original image.
Thus while the flush in efi_entry.S does ensure that the copy is visible
to noncacheable accesses, it does not guarantee that this is true for
the image the instructions are being executed from. This could have
disastrous effects when the MMU and caches are disabled if the image
has not been naturally evicted to the PoC.
Additionally, due to a missing dsb following the ic ialluis, the new
kernel image is not necessarily clean in the I-cache when it is branched
to, with similar potentially disastrous effects.
This patch adds additional flushing to ensure that the currently
executing stub text is flushed to the PoC and is thus visible to
noncacheable accesses. As it is placed after the instruction cache
maintenance for the new image and __flush_dcache_area already contains a
dsb, we do not need to add a separate barrier to ensure completion of
the icache maintenance.
Comments are updated to clarify the situation with regard to the two
images and the maintenance required for both.
Fixes: 3c7f255039
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Joel Schopp <joel.schopp@amd.com>
Reviewed-by: Roy Franz <roy.franz@linaro.org>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Ian Campbell <ijc@hellion.org.uk>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
When the CRTC is enabled, make sure the VBLANK machinery is enabled.
Failure to do so will cause drm_vblank_get() to not enable the VBLANK on
the CRTC and VBLANK-synchronized page-flips won't work.
While at it, get rid of the legacy drm_vblank_pre_modeset() and
drm_vblank_post_modeset() calls that are replaced by drm_vblank_on()
and drm_vblank_off().
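A hedged sketch of the idea (drm and dc->pipe are names assumed from the
driver):
	/* In the CRTC enable path: let drm_vblank_get() succeed again. */
	drm_vblank_on(drm, dc->pipe);

	/* In the CRTC disable path: the symmetric counterpart. */
	drm_vblank_off(drm, dc->pipe);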
Reported-by: Alexandre Courbot <acourbot@nvidia.com>
Tested-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Lenovo Ideapad Z560 has a mute LED that is controlled via EAPD pin
0x1b on CX20585 codec. (EAPD bit on corresponds to mute LED on.)
The machine doesn't need other EAPD, so the fixup concentrates on
controlling EAPD 0x1b following the vmaster state (but inversely).
Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=665315
Reported-by: Szymon Kowalczyk <fazerxlo@o2.pl>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Commit f5866db6 ("virtio_console: enable VQs early") tried to make
sure that DRIVER_OK was set when virtio_console started using its
virtqueues. Doing this in add_port(), however, means that we try
to set DRIVER_OK again when a port is dynamically added after
the probe function is done.
Let's move virtio_device_ready() to the probe function, just before
trying to use the virtqueues, instead. This is fine as nothing can
fail in between.
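A hedged sketch of the placement (portdev->vdev follows the driver; the
exact spot in the probe routine is an assumption):
	/* At the end of the probe routine, just before the virtqueues are
	 * used for the first time: mark the device ready exactly once.
	 * Ports added later no longer repeat this from add_port(). */
	virtio_device_ready(portdev->vdev);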
Reported-by: Thomas Graf <tgraf@suug.ch>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Commit 1dce626404 introduced a regression
spotted on several G94 (FDObz #85160). This device seems to expect the
vblank period to be set after setting scale instead of before.
V2: shove this in a separate function
This is a candidate bug-fix for 3.18
Signed-off-by: Roy Spliet <rspliet@eclipso.eu>
Tested-by: Zlatko Calusic <zcalusic@bitsync.net>
Tested-by: Michael Riesch <michael@riesch.at>
Tested-by: "poma" <pomidorabelisima@gmail.com>
Tested-by: Adam Williamson <adamw@happyassassin.net>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Commit "ltc/gf100-: fix cbc issues on certain boards" moved the setting
of the large page size bit from bar/nvc0 to fb/nvc0. GK20A uses its own
FB device and the change was thus not applied to it - fix this.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"Two fixes --- one of them not exactly a one liner, but things are
calming down on the KVM front at last"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: Fix uninitialized op->type for some immediate values
KVM: s390: virtio_ccw: remove unused variable
Merge tag 'xtensa-20141109' of git://github.com/czankel/xtensa-linux
Pull Xtensa fixes from Chris Zankel:
- fix umount syscall
- fix ISS and xtfpga Kconfig dependencies so that more randconfigs are
buildable
- add seccomp, getrandom, and memfd_create syscalls
- add defconfigs for KC705 and SMP LX200
- implement pgprot_noncached
* tag 'xtensa-20141109' of git://github.com/czankel/xtensa-linux:
xtensa: xtfpga: add lx200 SMP DTS and defconfig
xtensa: xtfpga: add generic KC705 board config
xtensa: re-wire umount syscall to sys_oldumount
xtensa: xtfpga: only select ethoc when ethernet is available
xtensa: add seccomp, getrandom, and memfd_create syscalls
xtensa: ISS: add BLOCK dependency to BLK_DEV_SIMDISK
xtensa: implement pgprot_noncached
xtensa/uapi: Add definition of TIOC[SG]RS485
atom scratch register race fix.
* 'drm-fixes-3.18' of git://people.freedesktop.org/~agd5f/linux:
drm/radeon: add locking around atombios scratch space usage
Now the exynos drm driver hits an infinite loop issue on multi-platform,
reported by Matwey V. Korniliv as below:
http://comments.gmane.org/gmane.comp.video.dri.devel/117622
This issue occurs because enabled non-kms drivers are probed before
the component master tries to bring up the device. This patch set resolves
the infinite loop issue and also includes fixups for exynos drm internal
issues.
* 'exynos-drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos:
drm/exynos: fix possible infinite loop issue
drm/exynos: g2d: fix null pointer dereference
drm/exynos: resolve infinite loop issue on non multi-platform
drm/exynos: resolve infinite loop issue on multi-platform
Pull crypto fixes from Herbert Xu:
- stack corruption fix for pseries hwrng driver
- add missing DMA unmap in caam crypto driver
- fix NUMA crash in qat crypto driver
- fix buggy mapping of zero-length associated data in qat crypto driver
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
hwrng: pseries - port to new read API and fix stack corruption
crypto: caam - fix missing dma unmap on error path
crypto: qat - Enforce valid numa configuration
crypto: qat - Prevent dma mapping zero length assoc data
Any attempt to call nfs_remove_bad_delegation() while a delegation is being
returned is currently a no-op. This means that we can end up looping
forever in nfs_end_delegation_return() if something causes the delegation
to be revoked.
This patch adds a mechanism whereby the state recovery code can communicate
to the delegation return code that the delegation is no longer valid and
that it should not be used when reclaiming state.
It also changes the return value for nfs4_handle_delegation_recall_error()
to ensure that nfs_end_delegation_return() does not reattempt the lock
reclaim before state recovery is done.
http://lkml.kernel.org/r/CAN-5tyHwG=Cn2Q9KsHWadewjpTTy_K26ee+UnSvHvG4192p-Xw@mail.gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Merge tag 'trace-fixes-v3.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fix from Steven Rostedt:
"Rabin Vincent found a way that tracing could cause an infinite loop in
the kernel. The splice logic wants a full page from the ring buffer
but the ring_buffer_wait() returns when there's any data in the ring
buffer. The splice code would then continue the loop waiting for a
full page. But if a full page never happens, the splice code will
never sleep and just continue to loop.
There's another case that Rabin fixed that could loop if there's no
memory and kmalloc() constantly returns NULL"
* tag 'trace-fixes-v3.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Do not risk busy looping in buffer splice
tracing: Do not busy wait in buffer splice
This patch removes the assumption made previously, that we only need to
check the delegation stateid when it matches the stateid on a cached
open.
If we believe that we hold a delegation for this file, then we must assume
that its stateid may have been revoked or expired too. If we don't test it
then our state recovery process may end up caching open/lock state in a
situation where it should not.
We therefore rename the function nfs41_clear_delegation_stateid as
nfs41_check_delegation_stateid, and change it to always run through the
delegation stateid test and recovery process as outlined in RFC5661.
http://lkml.kernel.org/r/CAN-5tyHwG=Cn2Q9KsHWadewjpTTy_K26ee+UnSvHvG4192p-Xw@mail.gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>