Commit Graph

20 Commits

Author SHA1 Message Date
Eric Biggers c4741b2305 crypto: run initcalls for generic implementations earlier
Use subsys_initcall for registration of all templates and generic
algorithm implementations, rather than module_init.  Then change
cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

This is needed so that when both a generic and optimized implementation
of an algorithm are built into the kernel (not loadable modules), the
generic implementation is registered before the optimized one.
Otherwise, the self-tests for the optimized implementation are unable to
allocate the generic implementation for the new comparison fuzz tests.
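
As an illustration, here is a minimal sketch of the registration pattern this
change implies for a built-in generic implementation.  The initcall macros and
crypto_register_alg()/crypto_unregister_alg() are real kernel APIs; the
"foo_generic" algorithm and its (elided) crypto_alg definition are hypothetical:

#include <linux/module.h>
#include <crypto/algapi.h>

static struct crypto_alg foo_generic_alg;	/* hypothetical; fields elided */

static int __init foo_generic_mod_init(void)
{
	return crypto_register_alg(&foo_generic_alg);
}

static void __exit foo_generic_mod_fini(void)
{
	crypto_unregister_alg(&foo_generic_alg);
}

/* Was: module_init(foo_generic_mod_init);
 * Registering at subsys_initcall time means that, when built in, the generic
 * implementation exists before any arch-optimized one runs its self-tests. */
subsys_initcall(foo_generic_mod_init);
module_exit(foo_generic_mod_fini);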

Note that on arm, a side effect of this change is that self-tests for
generic implementations may run before the unaligned access handler has
been installed.  So, unaligned accesses will crash the kernel.  This is
arguably a good thing as it makes it easier to detect that type of bug.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-18 22:15:03 +08:00
Harsh Jain a777336362 crypto: authencesn - Avoid twice completion call in decrypt path
The authencesn template in the decrypt path unconditionally calls
aead_request_complete after ahash_verify, which leads to the following kernel
panic after decryption.

[  338.539800] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
[  338.548372] PGD 0 P4D 0
[  338.551157] Oops: 0000 [#1] SMP PTI
[  338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G        W I       4.19.7+ #13
[  338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0        07/29/10
[  338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
[  338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff <8b> 04 25 04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
[  338.598547] RSP: 0018:ffff911c97803c00 EFLAGS: 00010246
[  338.604268] RAX: 0000000000000002 RBX: ffff911c4469ee00 RCX: 0000000000000000
[  338.612090] RDX: 0000000000000000 RSI: 0000000000000130 RDI: ffff911b87c20400
[  338.619874] RBP: 0000000000000000 R08: ffff911b87c20498 R09: 000000000000000a
[  338.627610] R10: 0000000000000001 R11: 0000000000000004 R12: 0000000000000000
[  338.635402] R13: ffff911c89590000 R14: ffff911c91730000 R15: 0000000000000000
[  338.643234] FS:  0000000000000000(0000) GS:ffff911c97800000(0000) knlGS:0000000000000000
[  338.652047] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  338.658299] CR2: 0000000000000004 CR3: 00000001ec20a000 CR4: 00000000000006f0
[  338.666382] Call Trace:
[  338.669051]  <IRQ>
[  338.671254]  esp_input_done+0x12/0x20 [esp4]
[  338.675922]  chcr_handle_resp+0x3b5/0x790 [chcr]
[  338.680949]  cpl_fw6_pld_handler+0x37/0x60 [chcr]
[  338.686080]  chcr_uld_rx_handler+0x22/0x50 [chcr]
[  338.691233]  uldrx_handler+0x8c/0xc0 [cxgb4]
[  338.695923]  process_responses+0x2f0/0x5d0 [cxgb4]
[  338.701177]  ? bitmap_find_next_zero_area_off+0x3a/0x90
[  338.706882]  ? matrix_alloc_area.constprop.7+0x60/0x90
[  338.712517]  ? apic_update_irq_cfg+0x82/0xf0
[  338.717177]  napi_rx_handler+0x14/0xe0 [cxgb4]
[  338.722015]  net_rx_action+0x2aa/0x3e0
[  338.726136]  __do_softirq+0xcb/0x280
[  338.730054]  irq_exit+0xde/0xf0
[  338.733504]  do_IRQ+0x54/0xd0
[  338.736745]  common_interrupt+0xf/0xf
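
The crash stems from completing the same aead_request twice; this can happen
when the inner cipher runs asynchronously and completes the request itself
while the template completes it again unconditionally.  A hedged sketch of the
guard pattern (function names are hypothetical; this is not the actual patch):

#include <crypto/internal/aead.h>
#include <linux/errno.h>

/* Hypothetical stand-in for the post-verification decrypt step; in the real
 * template this kicks the inner cipher and may return -EINPROGRESS. */
static int decrypt_tail_sketch(struct aead_request *req)
{
	return 0;
}

static void verify_done_sketch(struct crypto_async_request *areq, int err)
{
	struct aead_request *req = areq->data;

	if (err)
		goto out;

	err = decrypt_tail_sketch(req);
	if (err == -EINPROGRESS)
		return;		/* the inner cipher will complete req itself;
				 * completing it here as well is the double
				 * completion that triggers the oops above */
out:
	aead_request_complete(req, err);
}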

Fixes: 104880a6b4 ("crypto: authencesn - Convert to new AEAD...")
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-01-10 21:37:31 +08:00
Kees Cook 8d60539842 crypto: null - Remove VLA usage of skcipher
In the quest to remove all stack VLA usage from the kernel[1], this
replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage
with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(),
which uses a fixed stack size.

[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
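
A hedged sketch of the conversion pattern, loosely modeled on how authencesn
pushes data through the null cipher (SYNC_SKCIPHER_REQUEST_ON_STACK() and the
crypto_sync_skcipher helpers are the real replacement APIs; the surrounding
function is illustrative):

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

static int copy_via_null_sketch(struct crypto_sync_skcipher *null,
				struct scatterlist *src,
				struct scatterlist *dst,
				unsigned int len)
{
	/* Fixed-size on-stack request: no VLA, unlike the old
	 * SKCIPHER_REQUEST_ON_STACK() on a plain crypto_skcipher. */
	SYNC_SKCIPHER_REQUEST_ON_STACK(req, null);

	skcipher_request_set_sync_tfm(req, null);
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, src, dst, len, NULL);

	return crypto_skcipher_encrypt(req);
}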

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-28 12:46:08 +08:00
Tudor-Dan Ambarus 31545df391 crypto: authencesn - don't leak pointers to authenc keys
In crypto_authenc_esn_setkey we save pointers to the authenc keys
in a local variable of type struct crypto_authenc_keys and we don't
zeroize it after use. Fix this and don't leak pointers to the
authenc keys.
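
A minimal sketch of the setkey pattern the fix describes
(crypto_authenc_extractkeys() and memzero_explicit() are the real helpers; the
surrounding function is simplified and illustrative):

#include <crypto/aead.h>
#include <crypto/authenc.h>
#include <linux/string.h>

static int setkey_sketch(struct crypto_aead *tfm, const u8 *key,
			 unsigned int keylen)
{
	struct crypto_authenc_keys keys;
	int err;

	err = crypto_authenc_extractkeys(&keys, key, keylen);
	if (err)
		goto out;

	/* ... pass keys.authkey / keys.enckey to the inner algorithms ... */

out:
	/* Wipe the local copy so pointers to (and lengths of) the authenc
	 * keys do not linger on the stack. */
	memzero_explicit(&keys, sizeof(keys));
	return err;
}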

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-04-21 00:58:30 +08:00
Eric Biggers 3a2d4fb51e crypto: null - Get rid of crypto_{get,put}_default_null_skcipher2()
Since commit 499a66e6b6 ("crypto: null - Remove default null
blkcipher"), crypto_get_default_null_skcipher2() and
crypto_put_default_null_skcipher2() are the same as their non-2
equivalents.  So switch callers of the "2" versions over to the original
versions and remove the "2" versions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-12-22 19:29:08 +11:00
Herbert Xu 41cdf7a453 crypto: authencesn - Fix digest_null crash
When authencesn is used together with digest_null a crash will
occur on the decrypt path.  This is because normally we perform
a special setup to preserve the ESN, but this is skipped if there
is no authentication.  However, on the post-authentication path
it always expects the preservation to be in place, thus causing
a crash when digest_null is used.

This patch fixes this by also skipping the post-processing when
there is no authentication.
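
An illustrative shape of the fix (function and label names are hypothetical,
not the actual patch): the post-authentication tail bails out of the
ESN-restore step whenever the authentication size is zero, mirroring the skip
already done during setup:

#include <crypto/aead.h>

static int post_auth_tail_sketch(struct aead_request *req)
{
	struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);

	if (!crypto_aead_authsize(authenc_esn))
		goto decrypt;	/* no authentication (e.g. digest_null):
				 * nothing was preserved, so there is nothing
				 * to restore here */

	/* ... move the preserved ESN bytes back into place ... */

decrypt:
	/* ... hand the remaining payload to the inner cipher ... */
	return 0;
}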

Fixes: 104880a6b4 ("crypto: authencesn - Convert to new AEAD...")
Cc: <stable@vger.kernel.org>
Reported-by: Jan Tluka <jtluka@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-07-18 17:01:11 +08:00
Eric Biggers 60425a8bad crypto: skcipher - Get rid of crypto_spawn_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent.  So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:17 +08:00
Eric Biggers a35528eca0 crypto: skcipher - Get rid of crypto_grab_skcipher2()
Since commit 3a01d0ee2b ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent.  So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-11-01 08:37:16 +08:00
Herbert Xu e75445a844 crypto: authencesn - Use skcipher
This patch converts authencesn to use the new skcipher interface as
opposed to ablkcipher.

It also fixes a little bug where if a sync version of authencesn
is requested we may still end up using an async ahash.  This should
have no effect as none of the authencesn users can request a sync
authencesn.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:39 +08:00
Herbert Xu 927ef32dcc crypto: authenc - Consider ahash ASYNC bit
As it is, if you get an async ahash with a sync skcipher you'll
end up with a sync authenc, which is wrong.

This patch fixes it by considering the ASYNC bit from ahash as
well.

It also fixes a little bug where if a sync version of authenc
is requested we may still end up using an async ahash.

Neither of them should have any effect as none of the authenc
users can request a sync authenc.
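
A hedged sketch of the rule being enforced (variable names illustrative): the
resulting instance must advertise CRYPTO_ALG_ASYNC if either underlying
algorithm does:

#include <linux/crypto.h>

/* Combine the ASYNC bit from both halves; previously the ahash's ASYNC bit
 * was not considered, so an async ahash with a sync skcipher yielded a
 * "sync" authenc. */
static u32 authenc_async_flag_sketch(u32 ahash_flags, u32 skcipher_flags)
{
	return (ahash_flags | skcipher_flags) & CRYPTO_ALG_ASYNC;
}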

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 23:45:02 +08:00
Herbert Xu 5e4b8c1fcc crypto: aead - Remove CRYPTO_ALG_AEAD_NEW flag
This patch removes the CRYPTO_ALG_AEAD_NEW flag now that everyone
has been converted.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-08-17 16:53:53 +08:00
Herbert Xu 104880a6b4 crypto: authencesn - Convert to new AEAD interface
This patch converts authencesn to the new AEAD interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-08-10 23:20:13 +08:00
Herbert Xu 443c0d7ed9 crypto: authencesn - Fix breakage with new ESP code
The ESP code has been updated to generate a completely linear
AD SG list.  This unfortunately broke authencesn, which expects
the AD to be divided into at least three parts.

This patch fixes it to cope with the new format.  Later we will
fix it properly to accept arbitrary input and not rely on the
input being linear as part of the AEAD conversion.

Fixes: 7021b2e1cd ("esp4: Switch to new AEAD interface")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-08-10 23:13:51 +08:00
Herbert Xu 6650d09b3a crypto: authencesn - Use crypto_aead_set_reqsize helper
This patch uses the crypto_aead_set_reqsize helper to avoid directly
touching the internals of aead.
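
For reference, a minimal sketch of the helper in use (the request-context
struct is hypothetical; crypto_aead_set_reqsize() is the real accessor from
internal/aead.h):

#include <crypto/internal/aead.h>

struct req_ctx_sketch {		/* hypothetical per-request context */
	char tail[64];
};

static int init_tfm_sketch(struct crypto_aead *tfm)
{
	/* Set the per-request context size through the accessor rather than
	 * writing to the aead internals directly. */
	crypto_aead_set_reqsize(tfm, sizeof(struct req_ctx_sketch));
	return 0;
}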

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-05-13 10:31:38 +08:00
Herbert Xu 7fb2a4bdce crypto: authencesn - Include internal/aead.h
All AEAD implementations must include internal/aead.h in order
to access required helpers.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-05-13 10:31:26 +08:00
Kees Cook 4943ba16bb crypto: include crypto- module prefix in template
This adds the module loading prefix "crypto-" to the template lookup
as well.

For example, attempting to load 'vfat(blowfish)' via AF_ALG now includes
the "crypto-" prefix at every level, correctly rejecting "vfat":

	net-pf-38
	algif-hash
	crypto-vfat(blowfish)
	crypto-vfat(blowfish)-all
	crypto-vfat
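
A hedged sketch of the two sides of the prefixed lookup (simplified;
request_module() and MODULE_ALIAS_CRYPTO() are the real interfaces, the
wrapper function is illustrative):

#include <linux/kmod.h>
#include <linux/crypto.h>
#include <linux/module.h>

/* Lookup side: only modules that registered a "crypto-" alias can satisfy
 * these requests, so an unrelated alias such as "vfat" is never loaded as a
 * crypto template. */
static void demand_load_sketch(const char *name)
{
	request_module("crypto-%s", name);
	request_module("crypto-%s-all", name);
}

/* Provider side: a template module advertises itself under the prefix. */
MODULE_ALIAS_CRYPTO("authencesn");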

Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-11-26 20:06:30 +08:00
Mathias Krause fddc2c43c4 crypto: authencesn - Simplify key parsing
Use the common helper function crypto_authenc_extractkeys() for key
parsing.

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-10-16 20:56:25 +08:00
James Yonan 6bf37e5aa9 crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
i.e. when communication happens over a network.

Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found.  E.g.,
on x86_64/i5 for 512 bytes this can be ~50 cycles for a full mismatch
and up to ~850 cycles for a full match (cold).  This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.

This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign).  In other words, best- and worst-case
behaviour take the same amount of time to complete (in contrast to
memcmp).

Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.
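
For illustration only, a sketch of the constant-time comparison idea (this is
not the kernel's crypto_memneq, which as described below also gets a loop-free
16-byte fast path):

#include <stddef.h>

/* Accumulate differences over the entire buffers instead of returning at the
 * first mismatch, so the running time does not depend on the contents. */
static unsigned long memneq_sketch(const void *a, const void *b, size_t size)
{
	const unsigned char *pa = a, *pb = b;
	unsigned long neq = 0;

	while (size--)
		neq |= *pa++ ^ *pb++;	/* non-zero iff the buffers differ */

	return neq;	/* reports only (in)equality, never ordering */
}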

We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.

crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.

Using #pragma or __attribute__ annotations to disable optimization was
avoided, as that mechanism seems to have been considered broken or
unmaintained in GCC for a long time [1].  Therefore, we work
around that by specifying the compile flag for memneq.o directly in
the Makefile. We found that this seems to be most appropriate.

Since we compile with ("Os"), this patch also provides a loop-free "fast
path" for the frequently used 16-byte digests.  Similarly to the kernel
library string functions, we leave an option for future, even further
optimized, architecture-specific assembler implementations.

This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].

  [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
  [2] https://lkml.org/lkml/2013/2/10/131

Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-10-07 14:17:06 +08:00
Julia Lawall 3e8afe35c3 crypto: use ERR_CAST
Replace PTR_ERR followed by ERR_PTR with ERR_CAST, to be more concise.

The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@
expression err,x;
@@
-       err = PTR_ERR(x);
        if (IS_ERR(x))
-                return ERR_PTR(err);
+                return ERR_CAST(x);
// </smpl>

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-02-04 21:16:53 +08:00
Steffen Klassert a5079d084f crypto: authencesn - Add algorithm to handle IPsec extended sequence numbers
ESP with separate encryption/authentication algorithms needs special
treatment for the associated data.  This patch adds a new algorithm that
handles ESP with extended sequence numbers.
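
Conceptually (an illustration of the ESN semantics from RFC 4303, not the
kernel's actual data structures), the template exists because part of the
authenticated data never appears on the wire:

#include <linux/types.h>

struct esn_sketch {
	__be32 spi;	/* transmitted, authenticated */
	__be32 seq_lo;	/* transmitted, authenticated: low 32 bits of the ESN */
	__be32 seq_hi;	/* NOT transmitted, yet still fed to the integrity
			 * algorithm: high 32 bits of the ESN */
};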

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-03-13 20:22:27 -07:00