Tobias Regnery says:
====================
alx: fix fallout from multi queue conversion
Here are 3 fixes for the multi queue conversion in v4.10.
The first patch fixes a wrong condition in an if statement.
Patches 2 and 3 fix regressions in the corner case where requesting msi-x
interrupts fails and we fall back to msi or legacy interrupts.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
If requesting msi-x interrupts fails in alx_request_irq we fall back to
a single tx queue and msi or legacy interrupts.
Currently the adapter stops working in this case and we get tx watchdog
timeouts. For reasons unknown the adapter gets confused when we load the
dma addresses into the chip in alx_init_ring_ptrs twice: the first time with
multiple queues and the second time in the fallback case with a single
queue.
To fix this, move the call to alx_reinit_rings (which calls
alx_init_ring_ptrs) after alx_request_irq. At that point it is clear how
many tx queues we have and which dma addresses we use.
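A minimal sketch of the intended ordering; the function names come from the
text, the surrounding error handling is assumed:

    err = alx_request_irq(alx);     /* may fall back to a single tx queue */
    if (err)
        goto out_free_rings;

    /* only now is the final queue count known, so the DMA addresses are
     * loaded into the chip exactly once */
    alx_reinit_rings(alx);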
Fixes: d768319cd4 ("alx: enable multiple tx queues")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If requesting msi-x interrupts fails we should fall back to msi or
legacy interrupts. However, alx_realloc_resources doesn't call
alx_init_intr, so we fail to set the right number of tx queues.
This results in watchdog timeouts and a nonfunctional adapter.
Fixes: d768319cd4 ("alx: enable multiple tx queues")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The condition to free the descriptor memory is wrong: we want to free the
memory if it is set, not if it is unset. Invert the test to fix this
issue.
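Illustrative sketch only; the structure and field names are assumptions, not
taken from the driver:

    if (np->descmem.virt)           /* was: if (!np->descmem.virt) */
        dma_free_coherent(dev, np->descmem.size,
                          np->descmem.virt, np->descmem.dma);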
Fixes: b0999223f224b ("alx: add ability to allocate and free alx_napi structures")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Another rebranded Novatel E371. qmi_wwan should drive this device, while
cdc_ether should ignore it. Even though the USB descriptors are plain
CDC-ETHER, that USB interface is a QMI interface. Ref commit 7fdb7846c9
("qmi_wwan/cdc_ether: add device IDs for Dell 5804 (Novatel E371) WWAN
card")
Cc: Dan Williams <dcbw@redhat.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
sb_dirblklog is added to sb_blocklog to compute the directory block size
in bytes. Therefore, we must compare the sum of both those values
against XFS_MAX_BLOCKSIZE_LOG, not just dirblklog.
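A sketch of the corrected check; the struct and field names follow the usual
xfs superblock naming and are assumptions here:

    if (sbp->sb_blocklog + sbp->sb_dirblklog > XFS_MAX_BLOCKSIZE_LOG)
        return -EFSCORRUPTED;       /* directory block size too large */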
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Pull namespace fix from Eric Biederman:
"This has a single brown bag fix.
The possible deadlock with dec_pid_namespaces that I had thought was
fixed earlier turned out only to have been moved. So instead of being
clever, this change takes ucounts_lock with irqs disabled, so
dec_ucount can be used from any context without fear of deadlock.
The items accounted for by dec_ucount and inc_ucount are all
comparatively heavyweight objects so I don't expect this will have
any measurable performance impact"
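A minimal sketch of the irq-safe locking pattern described above (lock name
from the text, surrounding code assumed):

    unsigned long flags;

    spin_lock_irqsave(&ucounts_lock, flags);
    /* ... adjust the ucount ... */
    spin_unlock_irqrestore(&ucounts_lock, flags);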
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
userns: Make ucounts lock irq-safe
When using the ibmveth driver in a KVM/QEMU based VM, it currently
always prints out a scary error message like this when it is started:
ibmveth 71000003 (unregistered net_device): unable to change
checksum offload settings. 1 rc=-2 ret_attr=71000003
This happens because the driver always tries to enable the checksum
offloading without checking for the availability of this feature first.
QEMU does not support checksum offloading for the spapr-vlan device,
thus we always get the error message here.
According to the LoPAPR specification, the "ibm,illan-options" property
of the corresponding device tree node should be checked first to see
whether the H_ILLAN_ATTRIBUTES hypercall and thus the checksum offloading
feature is available. Thus let's do this in the ibmveth driver, too, so
that the error message is really only limited to cases where something
goes wrong, and does not occur if the feature is just missing.
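A rough sketch of the check described above; the property name comes from the
text, everything else (node pointer, helper) is an assumption:

    const void *illan_opts;

    illan_opts = of_get_property(dev->dev.of_node, "ibm,illan-options", NULL);
    if (illan_opts)
        /* firmware supports H_ILLAN_ATTRIBUTES, safe to try csum offload */
        ibmveth_set_csum_offload(netdev, 1);    /* hypothetical helper */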
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mac aging is applicable only to dynamically learnt remote mac
entries. Check for user-configured static remote mac entries
and skip aging.
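A sketch of the check inside the aging loop; using NUD_PERMANENT to mark a
user-configured static entry is an assumption based on the text:

    if (f->state & NUD_PERMANENT)
        continue;       /* never age out static entries */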
Signed-off-by: Balakrishnan Raman <ramanb@cumulusnetworks.com>
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch skips flushing static fdb entries in
ndo_stop, but flushes all fdb entries during vxlan
device delete. This is consistent with the bridge
driver's fdb handling.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet says:
====================
ipv6: fix ip6_tnl_parse_tlv_enc_lim() issues
The first patch fixes ip6_tnl_parse_tlv_enc_lim() callers,
a bug added in linux-3.7.
The second patch fixes ip6_tnl_parse_tlv_enc_lim() itself,
a bug that predates linux-2.6.12.
Based on a report from Dmitry Vyukov, thanks to KASAN.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This function suffers from multiple issues.
The first issue is that pskb_may_pull() may reallocate skb->head,
so the 'raw' pointer needs either to be reloaded or not used at all.
The second issue is that the NEXTHDR_DEST handling does not validate
that the options are present in skb->data, so we might read
garbage or access non-existent memory.
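An illustrative pattern only, not the exact function body; offsets and names
are assumed:

    if (!pskb_may_pull(skb, off + optlen))
        return 0;
    raw = skb->data;    /* reload: pskb_may_pull() may have moved skb->head */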
With help from Willem de Bruijn.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since ip6_tnl_parse_tlv_enc_lim() can call pskb_may_pull(),
we must reload any pointer that was related to skb->head
(or skb->data), or risk use after free.
Fixes: c12b395a46 ("gre: Support GRE over IPv6")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Dmitry Kozlov <xeb@mail.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
I don't have any guests with PAGE_SIZE > 64k but the
code seems to be clearly broken in that case
as PAGE_SIZE / MERGEABLE_BUFFER_ALIGN will need
more than 8 bits, and so the code in mergeable_ctx_to_buf_address
does not give us the actual true size.
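For example (assuming MERGEABLE_BUFFER_ALIGN is 256): with a 256 KiB
PAGE_SIZE, PAGE_SIZE / MERGEABLE_BUFFER_ALIGN is 1024, which cannot be
represented in the 8 bits the mergeable context encoding has available.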
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Dmitry reported a deadlock scenario:
unix_bind() path:
u->bindlock ==> sb_writer
do_splice() path:
sb_writer ==> pipe->mutex ==> u->bindlock
In the unix_bind() code path, unix_mknod() does not have to
be done with u->bindlock held, since it is a pure fs operation,
so we can just move unix_mknod() out.
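A rough sketch of the reordering; the helper signature, labels and surrounding
declarations are assumptions:

    /* pure fs operation, done without u->bindlock */
    err = unix_mknod(sun_path, mode, &path);
    if (err)
        goto out;

    err = mutex_lock_interruptible(&u->bindlock);
    if (err)
        goto out_put;
    /* publish the binding under u->bindlock */
    mutex_unlock(&u->bindlock);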
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Write-back cache in degraded mode introduces corner cases to the array.
Although we try to cover all these corner cases, it is safer to just
disable write-back cache when the array is in degraded mode.
In this patch, we disable writeback cache for degraded mode:
1. On device failure, if the array enters degraded mode, raid5_error()
will submit async job r5c_disable_writeback_async to disable
writeback;
2. In r5c_journal_mode_store(), it is invalid to enable writeback in
degraded mode;
3. In r5c_try_caching_write(), stripes with s->failed>0 will be handled
in write-through mode.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Write back cache requires a complex RMW mechanism, where old data is
read into dev->orig_page for prexor, and then xor is done with
dev->page. This logic is already implemented in the write path.
However, the current read path is not aware of this requirement. When
the array is optimal, the RMW is not required, as the data are
read from raid disks. However, when the target stripe is degraded,
complex RMW is required to generate the right data.
To keep the read path as clean as possible, we handle it by
flushing degraded, in-journal stripes before processing reads to
the missing dev.
Specifically, when there are read requests to a degraded stripe
with data in journal, handle_stripe_fill() calls
r5c_make_stripe_write_out() and exits. Then handle_stripe_dirtying()
will do the complex RMW and flush the stripe to RAID disks. After
that, read requests are handled.
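A sketch of the early-out described above; the field and helper names are
taken from the text and the surrounding logic is assumed:

    if (s->failed && s->injournal) {
        /* force the stripe out to the raid disks first */
        r5c_make_stripe_write_out(sh);
        return;         /* reads are handled after the flush */
    }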
There is one more corner case when there is a non-overwrite bio for
the missing (or out-of-sync) dev. handle_stripe_dirtying() will not
be able to process the non-overwrite bios without constructing the
data in handle_stripe_fill(). This is fixed by delaying non-overwrite
bios in handle_stripe_dirtying(). So handle_stripe_fill() works on
these bios after the stripe is flushed to raid disks.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
For safer operation, all arrays start in write-through mode, which has been
better tested and is more mature. Also, the write-through/write-back mode
isn't persistent across array restarts, so we always start the array in
write-through mode. However, if recovery found data-only stripes before the
shutdown (from previous write-back mode), it is not safe to start the array in
write-through mode, as write-through mode can not handle stripes with data in
write-back cache. To solve this problem, we flush all data-only stripes in
r5l_recovery_log(). When r5l_recovery_log() returns, the array starts with
empty cache in write-through mode.
This logic is implemented in r5c_recovery_flush_data_only_stripes():
1. enable write back cache
2. flush all stripes
3. wake up conf->mddev->thread
4. wait for all stripes to get flushed (reuse wait_for_quiescent)
5. disable write back cache
The wait in step 4 will be woken up in release_inactive_stripe_list()
when conf->active_stripes reaches 0.
It is safe to wake up mddev->thread here because all the resources
required for the thread have been initialized.
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
With write back cache, we use orig_page to do prexor. This patch
makes sure we read data into orig_page for it.
Flag R5_OrigPageUPTDODATE is added to show whether orig_page
has the latest data from raid disk.
We introduce a helper function uptodate_for_rmw() to simplify
a couple of conditions in handle_stripe_dirtying().
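A sketch of what the helper might look like; the exact conditions are
assumptions based on the description above:

    static bool uptodate_for_rmw(struct r5dev *dev)
    {
        return test_bit(R5_UPTODATE, &dev->flags) &&
               (!test_bit(R5_InJournal, &dev->flags) ||
                test_bit(R5_OrigPageUPTDODATE, &dev->flags));
    }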
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
If the interrupt allocation failed we should start freeing the CQ rings
rather than unregistering the netdev notifier.
Fixes: 29c8d9eba5 ("IB: Add vmw_pvrdma driver")
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
For run-on-reset SMP configs, non-Master cores call a routine which
waits until the Master gives them a "go" signal (currently using a shared
mem flag). The same routine then jumps off to the well-known entry point of
all non-Master cores, i.e. @first_lines_of_secondary.
This patch moves out the last part into one single place in early boot
code.
This is better in terms of abstraction: the wait API only waits and
returns, leaving out the "jump off to" part.
In the actual implementation this requires some restructuring of the early
boot code, as the Master now jumps to BSS setup explicitly,
vs. falling through into it before.
Technically this patch doesn't cause any functional change, it just
moves the ugly #ifdef'ry from assembly code to "C"
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
commit 3c7c7a2fc8 ("ARC: Don't use "+l" inline asm constraint")
modified the inline assembly to set up the LP_COUNT register manually and NOT
rely on gcc to do it (with the "+l" inline assembler constraint hint, now
being retired in the compiler).
However the fix was flawed as we didn't add LP_COUNT to the asm clobber list,
meaning gcc doesn't know that LP_COUNT or zero-delay-loops are in action
in the inline asm.
This resulted in some fun, as nested ZOL loops were being generated:
| mov lp_count,250000 ;16 # tmp235,
| lp .L__GCC__LP14 # <======= OUTER LOOP (gcc generated)
| .L14:
| ld r2, [r5] # MEM[(volatile u32 *)prephitmp_43], w
| dmb 1
| breq r2, -1, @.L21 #, w,,
| bbit0 r2,1,@.L13 # w,,
| ld r4,[r7] ;25 # loops_per_jiffy, loops_per_jiffy
| mpymu r3,r4,r6 #, loops_per_jiffy, tmp234
|
| mov lp_count, r3 # <====== INNER LOOP (from inline asm)
| lp 1f
| nop
| 1:
| nop_s
| .L__GCC__LP14: ; loop end, start is @.L14 #,
This caused issues with drivers relying on sane behaviour of udelay and
friends.
With LP_COUNT added to clobber list, gcc doesn't generate the outer
loop in say above case.
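A sketch of what the fixed delay loop looks like, with lp_count named in the
clobber list; the function name and exact body are assumptions, not the exact
kernel code:

    static inline void __delay(unsigned long loops)
    {
        __asm__ __volatile__(
        "       mov lp_count, %0        \n"
        "       lp  1f                  \n"
        "       nop                     \n"
        "1:                             \n"
        :
        : "r"(loops)
        : "lp_count");  /* gcc now knows the loop register is clobbered */
    }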
Addresses STAR 9001146134
Reported-by: Joao Pinto <jpinto@synopsys.com>
Fixes: 3c7c7a2fc8 ("ARC: Don't use "+l" inline asm constraint")
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
mlxsw_sp_nexthop_group_mac_update() is called in one of two cases:
1) When the MAC of a nexthop needs to be updated
2) When the size of a nexthop group has changed
In the second case the adjacency entries for the nexthop group need to
be reallocated from the adjacency table. In this case we must write to
the entries the MAC addresses of all the nexthops that should be
offloaded and not only those whose MAC changed. Otherwise, these entries
would be filled with garbage data, resulting in packet loss.
Fixes: a7ff87acd9 ("mlxsw: spectrum_router: Implement next-hop routing")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Runtime suspend shouldn't be executed if the tx queue is not empty,
because the device is not idle.
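A minimal sketch of the check (queue/field names assumed):

    if (!skb_queue_empty(&tp->tx_queue))
        return -EBUSY;  /* tx pending, refuse runtime suspend */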
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The old description basically read like "ethernet-phy-idAAAA.BBBB" can
be specified when you know the actual PHY ID. However, specifying this
has a side-effect: it forces Linux to bind to a certain PHY driver (the
one that matches the ID given in the compatible string), ignoring the ID
which is reported by the actual PHY.
Whenever a device is shipped with (multiple) different PHYs during its
production lifetime, explicitly specifying
"ethernet-phy-idAAAA.BBBB" could break certain revisions of that device.
Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Rob Herring <robh@kernel.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ignore the value of the interrupt distribution mode for common interrupts in
the IDU, since setting the affinity using a value from the Device Tree is
deprecated in ARC. Originally this was done in the idu_irq_xlate() function,
which is semantically wrong and does not guarantee that an affinity value
will be set properly. The idu_irq_enable() function is a better place for
initialization of common interrupts.
By default send all common interrupts to all available online CPUs.
The affinity of common interrupts in the IDU must be set manually since
in some cases the kernel will not call irq_set_affinity() by itself:
1. When the kernel is not configured with support for SMP.
2. When the kernel is configured with support for SMP but the upper
interrupt controllers do not support setting the affinity
and cannot propagate it to the IDU.
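A sketch of the default placement described above; the function names follow
the text, the body is an assumption:

    static void idu_irq_enable(struct irq_data *data)
    {
        /* by default spread common interrupts across all online CPUs */
        idu_irq_set_affinity(data, cpu_online_mask, false);
        idu_irq_unmask(data);
    }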
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Some NFSv4.0 servers may return a mode for the verifier following an open
with EXCLUSIVE4 createmode, but this does not mean the client should skip
setting the mode in the following SETATTR. It should only do that for
EXCLUSIVE4_1 or UNGUARDED createmode.
Fixes: 5334c5bdac ("NFS: Send attributes in OPEN request for NFS4_CREATE_EXCLUSIVE4_1")
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Xuan Qi reports that the Linux NFSv4 client failed to lock a file
that was migrated. The steps he observed on the wire:
1. The client sent a LOCK request to the source server
2. The source server replied NFS4ERR_MOVED
3. The client switched to the destination server
4. The client sent the same LOCK request to the destination
server with a bumped lock sequence ID
5. The destination server rejected the LOCK request with
NFS4ERR_BAD_SEQID
RFC 3530 section 8.1.5 provides a list of NFS errors which do not
bump a lock sequence ID.
However, RFC 3530 is now obsoleted by RFC 7530. In RFC 7530 section
9.1.7, this list has been updated by the addition of NFS4ERR_MOVED.
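A sketch of the updated predicate along the lines of RFC 7530 section 9.1.7;
the helper name and the exact error list shown here are assumptions:

    static bool seqid_mutating_err(u32 err)
    {
        switch (err) {
        case NFS4ERR_STALE_CLIENTID:
        case NFS4ERR_BAD_SEQID:
        case NFS4ERR_BADXDR:
        case NFS4ERR_RESOURCE:
        case NFS4ERR_MOVED:     /* newly added */
            return false;       /* do not bump the lock sequence ID */
        default:
            return true;
        }
    }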
Reported-by: Xuan Qi <xuan.qi@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Cc: stable@vger.kernel.org # v3.7+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
For devices that can register a page list that is bigger than
USHRT_MAX, we actually take the wrong value for sg_tablesize.
E.g: for CX4, max_fast_reg_page_list_len is 65536 (bigger than USHRT_MAX),
so we set sg_tablesize to 0 by mistake. Therefore, each IO that is
bigger than 4k is split into "< 4k" chunks, causing performance degradation.
Remove the wrong sg_tablesize assignment, and use the value that was set
during the address resolution handler with the needed casting.
Cc: <stable@vger.kernel.org> # v4.5+
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
For the last few months Darren and I have been co-maintaining the PDx86
subsystem. Make this fact official by updating the MAINTAINERS database.
Acked-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
After setting the indirect_sg_entries module parameter to a huge value
(e.g. 500,000), srp_alloc_req_data() fails to allocate indirect descriptors
for the request ring (kmalloc fails). This commit enforces the maximum value
of indirect_sg_entries to be SG_MAX_SEGMENTS, as specified in the module
parameter description.
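A sketch of the enforcement at module load (surrounding code assumed):

    if (indirect_sg_entries > SG_MAX_SEGMENTS) {
        pr_warn("Clamping indirect_sg_entries to %u\n", SG_MAX_SEGMENTS);
        indirect_sg_entries = SG_MAX_SEGMENTS;
    }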
Fixes: 65e8617fba (scsi: rename SCSI_MAX_{SG, SG_CHAIN}_SEGMENTS)
Fixes: c07d424d61 (IB/srp: add support for indirect tables that don't fit in SRP_CMD)
Cc: stable@vger.kernel.org # 4.7+
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
If the device supports arbitrary sg list mapping (device cap
IB_DEVICE_SG_GAPS_REG is set), we allocate the memory regions with
IB_MR_TYPE_SG_GAPS.
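A sketch of the mr_type selection described above (field names assumed):

    mr_type = (dev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG) ?
              IB_MR_TYPE_SG_GAPS : IB_MR_TYPE_MEM_REG;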
Fixes: 509c5f33f4 ("IB/srp: Prevent mapping failures")
Cc: <stable@vger.kernel.org> # 4.7+
Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Merge tag 'mac80211-for-davem-2017-01-24' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
Johannes Berg says:
====================
A single fix, for a sleeping context problem found by LTP.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In my previous patch, I missed that rate_control_rate_init() is
called from some places that cannot sleep, so it cannot call
ieee80211_recalc_min_chandef(). Remove that call for now to fix
the context bug, we'll have to find a different way to fix the
minimum channel width issue.
Fixes: 96aa2e7cf1 ("mac80211: calculate min channel width correctly")
Reported-by: Xiaolong Ye (via lkp-robot) <xiaolong.ye@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The last pgtable rework silently disabled the CMMA unused state by
setting a local pte variable (a parameter) instead of propagating it
back into the caller. Fix it.
Fixes: ebde765c0e ("s390/mm: uninline ptep_xxx functions from pgtable.h")
Cc: stable@vger.kernel.org # v4.6+
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We can't dereference the dio structure after submitting the last bio for
this request, as I/O completion might have happened before the code is
run. Introduce a local is_sync variable instead.
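A minimal sketch of the pattern (names assumed): capture the flag before the
final submit, since 'dio' may already be freed afterwards:

    bool is_sync = is_sync_kiocb(iocb);

    submit_bio(bio);            /* completion may free 'dio' right here */
    if (!is_sync)
        return -EIOCBQUEUED;    /* don't dereference dio past this point */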
Fixes: 542ff7bf ("block: new direct I/O implementation")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Matias Bjørling <m@bjorling.me>
Tested-by: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
Ensure that if userspace supplies insufficient data to
PTRACE_SETREGSET to fill all the registers, the thread's old
registers are preserved.
convert_vx_to_fp() is adapted to handle only a specified number of
registers rather than unconditionally handling all of them: other
callers of this function are adapted appropriately.
Based on an initial patch by Dave Martin.
Cc: stable@vger.kernel.org
Reported-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We cannot call nfs4_handle_exception() without first ensuring that the
slot has been freed. If not, we end up deadlocking with the process
waiting for recovery to complete, and recovery waiting for the slot
table to drain.
Fixes: 2e80dbe7ac ("NFSv4.1: Close callback races for OPEN, LAYOUTGET...")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Currently, if the user adds a stateful object with a name size exceeding
NFT_OBJ_MAXNAMELEN - 1 (i.e. 31), we silently truncate it down to 31.
This is not friendly; furthermore, this will cause duplicated stateful
objects when the first 31 characters of the names are the same. So limit the
stateful object's name size to NFT_OBJ_MAXNAMELEN - 1.
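A sketch of the rejection; the attribute name is assumed from nf_tables
conventions:

    if (nla_len(nla[NFTA_OBJ_NAME]) > NFT_OBJ_MAXNAMELEN)
        return -ERANGE;         /* "Numerical result out of range" */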
After applying this patch, an error message will be printed out like this:
# name_32=$(printf "%0.sQ" {1..32})
# nft add counter filter $name_32
<cmdline>:1:1-52: Error: Could not process rule: Numerical result out
of range
add counter filter QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Also, this patch cleans up the code which was missing the name size limit
validation in nftables.
Fixes: e50092404c ("netfilter: nf_tables: add stateful objects")
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Pull tile bugfix from Chris Metcalf:
"This avoids an issue with short userspace reads for regset via ptrace"
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
tile/ptrace: Preserve previous registers for short regset write
The virtual display doesn't allocate an amdgpu_encoder when initializing,
so we will get an invalid pointer if we try to free the amdgpu_encoder when
unloading the driver.
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>