Commit graph

982141 commits

Author SHA1 Message Date
Eric Biggers 28dcca4cc0 crypto: blake2b - sync with blake2s implementation
Sync the BLAKE2b code with the BLAKE2s code as much as possible:

- Move a lot of code into new headers <crypto/blake2b.h> and
  <crypto/internal/blake2b.h>, and adjust it to be like the
  corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and
  <crypto/internal/blake2s.h>.

- Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.

- Use a macro BLAKE2B_ALG() to define the shash_alg structs.

- Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b,
as optimized implementations can use the helper functions
crypto_blake2b_{setkey,init,update,final}() and
blake2b_compress_generic().  The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary
differences between the BLAKE2b and BLAKE2s code, so that the same
improvements can easily be made to both.  (The two algorithms are
basically identical, except for the word size and constants.)  It also
makes it straightforward to add a library API for BLAKE2b in the future
if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it
needs to be, as it doesn't actually provide a library API yet.  For
example, __blake2b_update() doesn't really need to exist yet; it could
just be inlined into crypto_blake2b_update().  But I believe this is
outweighed by the benefits of keeping the code in sync.
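
As a rough illustration (hypothetical sketch, not the actual kernel code: the ARM glue and blake2b_compress_arm() below are made up for the example), an optimized driver can now be little more than a compression function plus thin wrappers around the shared helpers:

	static int blake2b_arm_update(struct shash_desc *desc, const u8 *in,
				      unsigned int inlen)
	{
		return crypto_blake2b_update(desc, in, inlen, blake2b_compress_arm);
	}

	static int blake2b_arm_final(struct shash_desc *desc, u8 *out)
	{
		return crypto_blake2b_final(desc, out, blake2b_compress_arm);
	}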

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:39 +11:00
Eric Biggers a64bfe7ad4 wireguard: Kconfig: select CRYPTO_BLAKE2S_ARM
When available, select the new implementation of BLAKE2s for 32-bit ARM.
This is faster than the generic C implementation.

Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:39 +11:00
Eric Biggers 5172d322d3 crypto: arm/blake2s - add ARM scalar optimized BLAKE2s
Add an ARM scalar optimized implementation of BLAKE2s.

NEON isn't very useful for BLAKE2s because the BLAKE2s block size is too
small for NEON to help.  Each NEON instruction would depend on the
previous one, resulting in poor performance.

With scalar instructions, on the other hand, we can take advantage of
ARM's "free" rotations (like I did in chacha-scalar-core.S) to get an
implementation that runs much faster than the C implementation.
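
For illustration only (not a quote from the actual assembly): ARM lets most ALU instructions rotate their second operand for free, so the rotation that follows each XOR in the BLAKE2s G function need not be a separate instruction; the value can be kept un-rotated and the rotation applied the next time that register is used:

	add	r0, r0, r1		@ a += b
	eor	r3, r3, r0		@ d ^= a
	@ no separate "ror r3, r3, #16" is needed; apply it on the next use:
	add	r2, r2, r3, ror #16	@ c += ror32(d, 16)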

Performance results on Cortex-A7 in cycles per byte using the shash API:

	4096-byte messages:
		blake2s-256-arm:     18.8
		blake2s-256-generic: 26.0

	500-byte messages:
		blake2s-256-arm:     20.3
		blake2s-256-generic: 27.9

	100-byte messages:
		blake2s-256-arm:     29.7
		blake2s-256-generic: 39.2

	32-byte messages:
		blake2s-256-arm:     50.6
		blake2s-256-generic: 66.2

Except on very short messages, this is still slower than the NEON
implementation of BLAKE2b which I've written; that is 14.0, 16.4, 25.8,
and 76.1 cpb on 4096, 500, 100, and 32-byte messages, respectively.
However, optimized BLAKE2s is useful for cases where BLAKE2s is used
instead of BLAKE2b, such as WireGuard.

This new implementation is added in the form of a new module
blake2s-arm.ko, which is analogous to blake2s-x86_64.ko in that it
provides blake2s_compress_arch() for use by the library API as well as
optionally registering the algorithms with the shash API.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:39 +11:00
Eric Biggers bbda6e0f13 crypto: blake2s - include <linux/bug.h> instead of <asm/bug.h>
Address the following checkpatch warning:

	WARNING: Use #include <linux/bug.h> instead of <asm/bug.h>

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:39 +11:00
Eric Biggers 8786841bc2 crypto: blake2s - adjust include guard naming
Use the full path in the include guards for the BLAKE2s headers to avoid
ambiguity and to match the convention for most files in include/crypto/.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 7d87131fad crypto: blake2s - add comment for blake2s_state fields
The first three fields of 'struct blake2s_state' are used in assembly
code, which isn't immediately obvious, so add a comment to this effect.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 42ad8cf821 crypto: blake2s - optimize blake2s initialization
If no key was provided, then don't waste time initializing the block
buffer, as its initial contents won't be used.

Also, make crypto_blake2s_init() and blake2s() call a single internal
function __blake2s_init() which treats the key as optional, rather than
conditionally calling blake2s_init() or blake2s_init_key().  This
reduces the compiled code size, as previously both blake2s_init() and
blake2s_init_key() were being inlined into these two callers, except
when the key size passed to blake2s() was a compile-time constant.

These optimizations aren't that significant for BLAKE2s.  However, the
equivalent optimizations will be more significant for BLAKE2b, as
everything is twice as big in BLAKE2b.  And it's good to keep things
consistent rather than making optimizations for BLAKE2b but not BLAKE2s.
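
A simplified sketch of the idea (constant and field names follow <crypto/blake2s.h>, but this is not the exact patch): the parameter block is always set up, while the block buffer is only touched when a key is present:

	static inline void __blake2s_init(struct blake2s_state *state, size_t outlen,
					  const void *key, size_t keylen)
	{
		state->h[0] = BLAKE2S_IV0 ^ (0x01010000 | keylen << 8 | outlen);
		state->h[1] = BLAKE2S_IV1;
		state->h[2] = BLAKE2S_IV2;
		state->h[3] = BLAKE2S_IV3;
		state->h[4] = BLAKE2S_IV4;
		state->h[5] = BLAKE2S_IV5;
		state->h[6] = BLAKE2S_IV6;
		state->h[7] = BLAKE2S_IV7;
		state->t[0] = 0;
		state->t[1] = 0;
		state->f[0] = 0;
		state->f[1] = 0;
		state->buflen = 0;
		state->outlen = outlen;
		if (keylen) {
			/* keyed mode: the first block is the padded key */
			memcpy(state->buf, key, keylen);
			memset(&state->buf[keylen], 0, BLAKE2S_BLOCK_SIZE - keylen);
			state->buflen = BLAKE2S_BLOCK_SIZE;
		}
	}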

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 8c4a93a127 crypto: blake2s - share the "shash" API boilerplate code
Add helper functions for shash implementations of BLAKE2s to
include/crypto/internal/blake2s.h, taking advantage of
__blake2s_update() and __blake2s_final() that were added by the previous
patch to share more code between the library and shash implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
shash_alg::setkey and shash_alg::init directly, while
crypto_blake2s_update() and crypto_blake2s_final() take an extra
'blake2s_compress_t' function pointer parameter.  This allows the
implementation of the compression function to be overridden, which is
the only part that optimized implementations really care about.
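
A sketch of the resulting shape (assuming signatures along the lines described above; not necessarily the exact patch):

	typedef void (*blake2s_compress_t)(struct blake2s_state *state,
					   const u8 *block, size_t nblocks, u32 inc);

	static inline int crypto_blake2s_update(struct shash_desc *desc,
						const u8 *in, unsigned int inlen,
						blake2s_compress_t compress)
	{
		struct blake2s_state *state = shash_desc_ctx(desc);

		__blake2s_update(state, in, inlen, compress);
		return 0;
	}

An optimized driver can then simply pass its own compression function, e.g.
crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch).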

The new functions are inline functions (similar to those in sha1_base.h,
sha256_base.h, and sm3_base.h) because this avoids needing to add a new
module blake2s_helpers.ko, they aren't *too* long, and this avoids
indirect calls which are expensive these days.  Note that they can't go
in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S
from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

Finally, use these new helper functions in the x86 implementation of
BLAKE2s.  (This part should be a separate patch, but unfortunately the
x86 implementation used the exact same function names like
"crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 057edc9c8b crypto: blake2s - move update and final logic to internal/blake2s.h
Move most of blake2s_update() and blake2s_final() into new inline
functions __blake2s_update() and __blake2s_final() in
include/crypto/internal/blake2s.h so that this logic can be shared by
the shash helper functions.  This will avoid duplicating this logic
between the library and shash implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers df412e7efd crypto: blake2s - remove unneeded includes
It doesn't make sense for the generic implementation of BLAKE2s to
include <crypto/internal/simd.h> and <linux/jump_label.h>, as these are
things that would only be useful in an architecture-specific
implementation.  Remove these unnecessary includes.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 1aa90f4cf0 crypto: x86/blake2s - define shash_alg structs using macros
The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size.  So, avoid
code duplication by using a macro to define these structs.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers 0d396058f9 crypto: blake2s - define shash_alg structs using macros
The shash_alg structs for the four variants of BLAKE2s are identical
except for the algorithm name, driver name, and digest size.  So, avoid
code duplication by using a macro to define these structs.
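
For illustration, such a macro can look roughly like this (field values are indicative rather than a verbatim copy of the patch):

	#define BLAKE2S_ALG(name, driver_name, digest_size)			\
	{									\
		.base.cra_name		= name,					\
		.base.cra_driver_name	= driver_name,				\
		.base.cra_priority	= 100,					\
		.base.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,		\
		.base.cra_blocksize	= BLAKE2S_BLOCK_SIZE,			\
		.base.cra_ctxsize	= sizeof(struct blake2s_tfm_ctx),	\
		.base.cra_module	= THIS_MODULE,				\
		.digestsize		= digest_size,				\
		.setkey			= crypto_blake2s_setkey,		\
		.init			= crypto_blake2s_init,			\
		.update			= crypto_blake2s_update,		\
		.final			= crypto_blake2s_final,			\
		.descsize		= sizeof(struct blake2s_state),		\
	}

	static struct shash_alg blake2s_algs[] = {
		BLAKE2S_ALG("blake2s-128", "blake2s-128-generic", BLAKE2S_128_HASH_SIZE),
		BLAKE2S_ALG("blake2s-160", "blake2s-160-generic", BLAKE2S_160_HASH_SIZE),
		BLAKE2S_ALG("blake2s-224", "blake2s-224-generic", BLAKE2S_224_HASH_SIZE),
		BLAKE2S_ALG("blake2s-256", "blake2s-256-generic", BLAKE2S_256_HASH_SIZE),
	};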

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Christophe JAILLET c4ff41b93d hwrng: ingenic - Fix a resource leak in an error handling path
In case of error, we should call 'clk_disable_unprepare()' to undo a
previous 'clk_prepare_enable()' call, as already done in the remove
function.
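
The pattern being fixed looks roughly like this (sketch; variable names are illustrative, not the exact driver code):

	ret = clk_prepare_enable(trng->clk);
	if (ret)
		return ret;

	ret = hwrng_register(&trng->rng);
	if (ret) {
		clk_disable_unprepare(trng->clk);	/* the missing cleanup */
		return ret;
	}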

Fixes: 406346d222 ("hwrng: ingenic - Add hardware TRNG for Ingenic X1830")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: 周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Matthias Brugger 256693a362 hwrng: iproc-rng200 - Move enable/disable in separate function
We call the same code to enable and disable the block in various
parts of the driver. Put that code into a new function to reduce code
duplication.
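
A sketch of the resulting helper (register and mask names are assumed from the commit text; the exact bit handling may differ):

	static void iproc_rng200_enable_set(void __iomem *rng_base, bool enable)
	{
		u32 val;

		val = ioread32(rng_base + RNG_CTRL_OFFSET);
		val &= ~RNG_RBGEN_MASK;
		if (enable)
			val |= RNG_RBGEN_ENABLE;
		iowrite32(val, rng_base + RNG_CTRL_OFFSET);
	}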

Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Matthias Brugger 96a6af5403 hwrng: iproc-rng200 - Fix disable of the block.
When trying to disable the block, we bitwise OR the control
register with the value zero. This is confusing, as a bitwise OR with
zero has no effect at all. Drop this, as we already set the enable bit
to zero by applying the inverted RNG_RBGEN_MASK.

Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Acked-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Ard Biesheuvel 5318d3db46 crypto: arm64/aes-ctr - improve tail handling
Counter mode is a stream cipher chaining mode that is typically used
with inputs of arbitrary length, so a tail block which is smaller than
a full AES block is the rule rather than the exception.

The current ctr(aes) implementation for arm64 always makes a separate
call into the assembler routine to process this tail block, which is
suboptimal, given that it requires reloading of the AES round keys,
and prevents us from handling this tail block using the 5-way stride
that we use for better performance on deep pipelines.

So let's update the assembler routine so it can handle any input size,
and uses NEON permutation instructions and overlapping loads and stores
to handle the tail block. This results in a ~16% speedup for 1420 byte
blocks on cores with deep pipelines such as ThunderX2.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Ard Biesheuvel 15deb4333c crypto: arm64/aes-ce - really hide slower algos when faster ones are enabled
Commit 69b6f2e817 ("crypto: arm64/aes-neon - limit exposed routines if
faster driver is enabled") intended to hide modes from the plain NEON
driver that are also implemented by the faster bit sliced NEON one if
both are enabled. However, the defined() CPP function does not detect
if the bit sliced NEON driver is enabled as a module. So instead, let's
use IS_ENABLED() here.

Fixes: 69b6f2e817 ("crypto: arm64/aes-neon - limit exposed routines if ...")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Daniele Alessandrelli 5a5a27b3e1 MAINTAINERS: Add maintainers for Keem Bay OCS HCU driver
Add maintainers for the Intel Keem Bay Offload Crypto Subsystem (OCS)
Hash Control Unit (HCU) crypto driver.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Daniele Alessandrelli b46f803688 crypto: keembay-ocs-hcu - Add optional support for sha224
Add optional support of sha224 and hmac(sha224).

Co-developed-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:37 +11:00
Daniele Alessandrelli ae832e329a crypto: keembay-ocs-hcu - Add HMAC support
Add HMAC support to the Keem Bay OCS HCU driver, thus making it provide
the following additional transformations:
- hmac(sha256)
- hmac(sha384)
- hmac(sha512)
- hmac(sm3)

The Keem Bay OCS HCU hardware does not allow "context-switch" for HMAC
operations, i.e., it does not support computing a partial HMAC, saving its
state, and then continuing it later. Therefore, full hardware acceleration
is provided only when possible (e.g., when crypto_ahash_digest() is
called); in all other cases hardware acceleration is only partial (OPAD
and IPAD calculation is done in software, while hashing is hardware
accelerated).
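
For reference, the standard HMAC construction (RFC 2104) shows why this split is possible: only the two hash computations need the hardware, while the key padding and the ipad/opad XORs can stay in software:

	HMAC(K, m) = H((K' XOR opad) || H((K' XOR ipad) || m))

	where K' is the key padded (or hashed) to the hash block size,
	ipad is the byte 0x36 repeated and opad is the byte 0x5c repeated.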

Co-developed-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Declan Murphy 472b04444c crypto: keembay - Add Keem Bay OCS HCU driver
Add support for the Hashing Control Unit (HCU) included in the Offload
Crypto Subsystem (OCS) of the Intel Keem Bay SoC, thus enabling
hardware-accelerated hashing on the Keem Bay SoC for the following
algorithms:
- sha256
- sha384
- sha512
- sm3

The driver is composed of two files:

- 'ocs-hcu.c' which interacts with the hardware and abstracts it by
  providing an API following the usual paradigm used in hashing drivers
  / libraries (e.g., hash_init(), hash_update(), hash_final(), etc.).
  NOTE: this API can block and sleep, since completions are used to wait
  for the HW to complete the hashing.

- 'keembay-ocs-hcu-core.c' which exports the functionality provided by
  'ocs-hcu.c' as an ahash crypto driver. The crypto engine is used to
  provide asynchronous behavior. 'keembay-ocs-hcu-core.c' also takes
  care of the DMA mapping of the input sg list.

The driver passes crypto manager self-tests, including the extra tests
(CRYPTO_MANAGER_EXTRA_TESTS=y).

Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Co-developed-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Declan Murphy 33ff64884c dt-bindings: crypto: Add Keem Bay OCS HCU bindings
Add device-tree bindings for the Intel Keem Bay Offload Crypto Subsystem
(OCS) Hashing Control Unit (HCU) crypto driver.

Signed-off-by: Declan Murphy <declan.murphy@intel.com>
Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Corentin Labbe 44122cc6ee crypto: sun4i-ss - add SPDX header and remove blank lines
This patch fixes some remaining style issues.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Corentin Labbe b1f578b85a crypto: sun4i-ss - enabled stats via debugfs
This patch enables access to usage stats for each algorithm.

Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Corentin Labbe 9bc3dd24e7 crypto: sun4i-ss - fix kmap usage
With the recent kmap change, some tests which were conditional on
CONFIG_DEBUG_HIGHMEM are now enabled by default.
This made it possible to detect a problem in the sun4i-ss usage of kmap.

sun4i-ss uses two kmaps via sg_miter (one for input, one for output), but
using two kmaps at the same time is hard:
"the ordering has to be correct and with sg_miter that's probably hard to get
right." (quoting tglx)

So the easiest solution is to never have two sg_miter/kmap mappings open at
the same time.
After each use of sg_miter, I store the current index so that sg_miter can be
resumed at the right place.

Fixes: 6298e94821 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Corentin Labbe 4ec8977b92 crypto: sun4i-ss - initialize need_fallback
The need_fallback flag is never initialized and seems to always be true at runtime.
So all hardware operations are always bypassed.

Fixes: 0ae1f46c55 ("crypto: sun4i-ss - fallback when length is not multiple of blocksize")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:36 +11:00
Corentin Labbe 5ab6177fa0 crypto: sun4i-ss - handle BigEndian for cipher
Ciphers produce invalid results on BE.
Key and IV need to be written in LE.

Fixes: 6298e94821 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Corentin Labbe b756f1c8fc crypto: sun4i-ss - IV register does not work on A10 and A13
Allwinner A10 and A13 SoCs have a version of the SS which produces an
invalid IV in the IVx register.

Instead of adding a variant for those, let's convert SS to produce the IV
directly from the data.
Fixes: 6298e94821 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Corentin Labbe 7bdcd851fa crypto: sun4i-ss - checking sg length is not sufficient
The optimized cipher function needs a length that is a multiple of 4 bytes,
but it sometimes gets an odd length.
This is because SG data can be stored with an offset.

So the fix is to also check that the offset is aligned to 4 bytes.
Fixes: 6298e94821 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Cc: <stable@vger.kernel.org>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Corentin Labbe 583513510a crypto: sun4i-ss - linearize buffers content must be kept
When running the non-optimized cipher function, the SS produces partially
random output.
This is due to the linearize buffers being reset after each loop.

To keep stack usage low, instead of moving them back to the start of the
function, I moved them into sun4i_ss_ctx.

Fixes: 8d3bcb9900 ("crypto: sun4i-ss - reduce stack usage")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Tian Tao 7334a4be50 crypto: inside-secure - fix platform_get_irq.cocci warnings
Remove the dev_err() message after the platform_get_irq*() failure:
the message at drivers/crypto/inside-secure/safexcel.c line 1161 is redundant
because platform_get_irq() already prints an error.
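
The resulting pattern is simply (illustrative):

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;	/* no dev_err(): platform_get_irq() already logged it */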

Generated by: scripts/coccinelle/api/platform_get_irq.cocci

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Acked-by: Antoine Tenart <atenart@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Ard Biesheuvel 0eb76ba29d crypto: remove cipher routines from public crypto API
The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
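
For illustration, the namespaced export and its corresponding import look like this (the symbol shown is one representative cipher routine):

	/* exporting side, e.g. crypto/cipher.c: */
	EXPORT_SYMBOL_NS_GPL(crypto_cipher_encrypt_one, CRYPTO_INTERNAL);

	/* any remaining in-kernel user of the low-level cipher API: */
	MODULE_IMPORT_NS(CRYPTO_INTERNAL);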

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Ard Biesheuvel a3b01ffddc chcr_ktls: use AES library for single use cipher
Allocating a cipher via the crypto API only to free it again after using
it to encrypt a single block is unnecessary in cases where the algorithm
is known at compile time. So replace this pattern with a call to the AES
library.
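
The AES library pattern referred to here looks roughly like this (sketch; the variable names are illustrative, not the actual chcr_ktls code):

	#include <crypto/aes.h>

	struct crypto_aes_ctx aes;
	int err;

	err = aes_expandkey(&aes, key, keylen);
	if (err)
		return err;
	aes_encrypt(&aes, dst, src);		/* one block, no allocation */
	memzero_explicit(&aes, sizeof(aes));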

Cc: Ayush Sawal <ayush.sawal@chelsio.com>
Cc: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
Cc: Rohit Maheshwari <rohitm@chelsio.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Tian Tao bbfd06c7c8 crypto: ccree - remove unused including <linux/version.h>
Remove the include of <linux/version.h>, which is not needed.

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Fabio Estevam c4dc99e14c crypto: sahara - Remove unused .id_table support
Since 5.10-rc1 i.MX is a devicetree-only platform and the existing
.id_table support in this driver was only useful for old non-devicetree
platforms.

Remove the unused .id_table support.

Signed-off-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:34 +11:00
Ard Biesheuvel 303fd3e1c7 crypto: tcrypt - avoid signed overflow in byte count
The signed long type used for printing the number of bytes processed in
tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient
to cover the performance of common accelerated ciphers such as AES-NI
when benchmarked with sec=1. So switch to u64 instead.

While at it, fix up a missing printk->pr_cont conversion in the AEAD
benchmark.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:34 +11:00
Ard Biesheuvel ddf169a98f crypto: aesni - implement support for cts(cbc(aes))
Follow the same approach as the arm64 driver for implementing a version
of AES-NI in CBC mode that supports ciphertext stealing. This results in
a ~2x speed increase for relatively short inputs (less than 256 bytes),
which is relevant given that AES-CBC with ciphertext stealing is used
for filename encryption in the fscrypt layer. For larger inputs, the
speedup is still significant (~25% on decryption, ~6% on encryption).

Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:34 +11:00
Krzysztof Kozlowski a417178abc MAINTAINERS: crypto: s5p-sss: drop Kamil Konieczny
E-mails to Kamil Konieczny at his Samsung address bounce with 550 (User
unknown).  Kamil no longer takes care of the Samsung S5P SSS driver, so
remove the invalid email address from:
 - mailmap,
 - bindings maintainer entries,
 - maintainers entry for S5P Security Subsystem crypto accelerator.

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:34 +11:00
Vic Wu 6a702fa533 crypto: mediatek - remove obsolete driver
The crypto mediatek driver has been replaced by the inside-secure
driver now. Remove this driver to avoid having duplicate drivers.

Signed-off-by: Vic Wu <vic.wu@mediatek.com>
Acked-by: Ryder Lee <ryder.lee@mediatek.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:34 +11:00
Ard Biesheuvel 0aa171e9b2 crypto: ecdh - avoid buffer overflow in ecdh_set_secret()
Pavel reports that commit 17858b140b ("crypto: ecdh - avoid unaligned
accesses in ecdh_set_secret()") fixes one problem but introduces another:
the unconditional memcpy() introduced by that commit may overflow the
target buffer if the source data is invalid, which could be the result of
intentional tampering.

So check params.key_size explicitly against the size of the target buffer
before validating the key further.
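
The gist of the fix is an explicit bound check before anything is copied, along the lines of (sketch, not the exact patch):

	if (params.key_size > sizeof(u64) * ctx->ndigits)
		return -EINVAL;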

Fixes: 17858b140b ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
Reported-by: Pavel Machek <pavel@denx.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:35:35 +11:00
Ard Biesheuvel fd16931a2f crypto: arm/chacha-neon - add missing counter increment
Commit 86cd97ec4b ("crypto: arm/chacha-neon - optimize for non-block
size multiples") refactored the chacha block handling in the glue code in
a way that may result in the counter increment being omitted when calling
chacha_block_xor_neon() to process a full block. This violates the skcipher
API, which requires that the output IV is suitable for handling more input
as long as the preceding input has been presented in round multiples of the
block size. Also, the same code is exposed via the chacha library interface
whose callers may actually rely on this increment to occur even for final
blocks that are smaller than the chacha block size.

So increment the counter after calling chacha_block_xor_neon().
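
In the glue code this amounts to (sketch; state[12] is the 32-bit block counter word of the ChaCha state):

	chacha_block_xor_neon(state, dst, src, nrounds);
	state[12]++;	/* keep the returned IV usable for further input */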

Fixes: 86cd97ec4b ("crypto: arm/chacha-neon - optimize for non-block size multiples")
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:35:35 +11:00
Linus Torvalds 5c8fe583cc Linux 5.11-rc1 2020-12-27 15:30:22 -08:00
Linus Torvalds 14e3e989f6 proc mountinfo: make splice available again
Since commit 36e2c7421f ("fs: don't allow splice read/write without
explicit ops") we've required that file operation structures explicitly
enable splice support, rather than falling back to the default handlers.

Most /proc files use the indirect 'struct proc_ops' to describe their
file operations, and were fixed up to support splice earlier in commits
40be821d627c..b24c30c67863, but the mountinfo files interact with the
VFS directly using their own 'struct file_operations' and got missed as
a result.

This adds the necessary support for splice to work for /proc/*/mountinfo
and friends.
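
The fix boils down to wiring up the splice handler in those file_operations, roughly (illustrative; the open/release handler names are assumed):

	static const struct file_operations proc_mountinfo_operations = {
		.open		= mountinfo_open,
		.read_iter	= seq_read_iter,
		.splice_read	= generic_file_splice_read,	/* previously missing */
		.llseek		= seq_lseek,
		.release	= mounts_release,
	};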

Reported-by: Joan Bruguera Micó <joanbrugueram@gmail.com>
Reported-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=209971
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-27 12:00:36 -08:00
Linus Torvalds 52cd5f9c22 Bug fix for IDT NTB and Intel NTB LTR management support

Merge tag 'ntb-5.11' of git://github.com/jonmason/ntb

Pull NTB fixes from Jon Mason:
 "Bug fix for IDT NTB and Intel NTB LTR management support"

* tag 'ntb-5.11' of git://github.com/jonmason/ntb:
  ntb: intel: add Intel NTB LTR vendor support for gen4 NTB
  ntb: idt: fix error check in ntb_hw_idt.c
2020-12-27 09:22:55 -08:00
Linus Torvalds 33c148a4ae Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
 "Fix a number of autobuild failures due to missing Kconfig
  dependencies"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: qat - add CRYPTO_AES to Kconfig dependencies
  crypto: keembay - Add dependency on HAS_IOMEM
  crypto: keembay - CRYPTO_DEV_KEEMBAY_OCS_AES_SM4 should depend on ARCH_KEEMBAY
2020-12-27 09:14:32 -08:00
Linus Torvalds cce622ab92 Fix a segfault that occurs when built with Clang.
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'objtool-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull objtool fix from Ingo Molnar:
 "Fix a segfault that occurs when built with Clang"

* tag 'objtool-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Fix seg fault with Clang non-section symbols
2020-12-27 09:08:23 -08:00
Linus Torvalds 6be5f58215 Misc fixes/updates:
- Fix static keys usage in module __init sections
 - Add separate MAINTAINERS entry for static branches/calls
 - Fix lockdep splat with CONFIG_PREEMPTIRQ_EVENTS=y tracing
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'locking-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fixes from Ingo Molnar:
 "Misc fixes/updates:

   - Fix static keys usage in module __init sections

   - Add separate MAINTAINERS entry for static branches/calls

   - Fix lockdep splat with CONFIG_PREEMPTIRQ_EVENTS=y tracing"

* tag 'locking-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  softirq: Avoid bad tracing / lockdep interaction
  jump_label/static_call: Add MAINTAINERS
  jump_label: Fix usage in module __init
2020-12-27 09:06:10 -08:00
Linus Torvalds 2eeefc60ad Update/fix two CPU sanity checks in the hotplug and the boot code,
and fix a typo in the Kconfig help text.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'timers-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer fixes from Ingo Molnar:
 "Update/fix two CPU sanity checks in the hotplug and the boot code, and
  fix a typo in the Kconfig help text.

  [ Context: the first two commits are the result of an ongoing
    annotation+review work of (intentional) tick_do_timer_cpu() data
    races reported by KCSAN, but the annotations aren't fully cooked
    yet ]"

* tag 'timers-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  timekeeping: Fix spelling mistake in Kconfig "fullfill" -> "fulfill"
  tick/sched: Remove bogus boot "safety" check
  tick: Remove pointless cpu valid check in hotplug code
2020-12-27 09:03:41 -08:00
Linus Torvalds 3b80dee70e Fix a context switch performance regression.
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'sched-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fix from Ingo Molnar:
 "Fix a context switch performance regression"

* tag 'sched-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Optimize finish_lock_switch()
2020-12-27 09:00:47 -08:00
Linus Torvalds f838f8d2b6 mfd: ab8500-debugfs: Remove extraneous seq_putc
Commit c9a3c4e637 ("mfd: ab8500-debugfs: Remove extraneous curly
brace") removed a left-over curly brace that caused build failures, but
Joe Perches points out that the subsequent 'seq_putc()' should also be
removed, because the commit that caused all these problems already added
the final '\n' to the seq_printf() above it.

Reported-by: Joe Perches <joe@perches.com>
Fixes: 886c812165 ("mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc")
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-26 09:19:49 -08:00