Commit graph

62 commits

Author SHA1 Message Date
Herbert Xu 4989d4f07a crypto: api - Remove unused crypto_type lookup function
The lookup function in crypto_type was only used for the implicit
IV generators which have been completely removed from the crypto
API.

This patch removes the lookup function as it is now useless.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-31 01:32:57 +08:00
Ard Biesheuvel 45fe93dff2 crypto: algapi - make crypto_xor() take separate dst and src arguments
There are quite a number of occurrences in the kernel of the pattern

  if (dst != src)
          memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
  crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

or

  crypto_xor(keystream, src, nbytes);
  memcpy(dst, keystream, nbytes);

where crypto_xor() is preceded or followed by a memcpy() invocation
that is only there because crypto_xor() uses its output parameter as
one of the inputs. To avoid having to add new instances of this pattern
in the arm64 code, which will be refactored to implement non-SIMD
fallbacks, add an alternative implementation called crypto_xor_cpy(),
taking separate input and output arguments. This removes the need for
the separate memcpy().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-08-04 09:27:15 +08:00
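
A minimal sketch of the before/after pattern described above. It uses a local
xor_cpy() stand-in for crypto_xor_cpy() so that the example is self-contained;
the buffer names are hypothetical.

  #include <stddef.h>
  #include <stdint.h>

  /* Stand-in for the kernel's crypto_xor_cpy(): writes src1 ^ src2 into
   * dst, so no separate memcpy() of one input into dst is needed. */
  static void xor_cpy(uint8_t *dst, const uint8_t *src1,
                      const uint8_t *src2, size_t len)
  {
          while (len--)
                  *dst++ = *src1++ ^ *src2++;
  }

  static void encrypt_tail(uint8_t *dst, const uint8_t *src,
                           const uint8_t *keystream, size_t nbytes)
  {
          /* Old pattern:  crypto_xor(keystream, src, nbytes);
           *               memcpy(dst, keystream, nbytes);
           * New pattern:  one call with distinct output and inputs. */
          xor_cpy(dst, keystream, src, nbytes);
  }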
Ard Biesheuvel a7c391f04f crypto: algapi - use separate dst and src operands for __crypto_xor()
In preparation for introducing crypto_xor_cpy(), which will use separate
operands for input and output, modify the __crypto_xor() implementation,
which it will share with the existing crypto_xor(); __crypto_xor() provides
the actual functionality when the inline version is not used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-08-04 09:27:05 +08:00
Herbert Xu 016df0abc5 crypto: api - Add crypto_requires_off helper
This patch adds crypto_requires_off which is an extension of
crypto_requires_sync for similar bits such as NEED_FALLBACK.

Cc: stable@vger.kernel.org #4.10
Suggested-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-27 18:09:39 +08:00
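
A self-contained sketch of the bit test such a helper can perform, assuming the
semantics described (the caller's mask covers a flag that is cleared in the
type). The flag values below are stand-ins, not the kernel's.

  #include <stdint.h>

  #define ALG_ASYNC         0x00000080u  /* hypothetical flag values */
  #define ALG_NEED_FALLBACK 0x00000100u

  /* Non-zero when (type, mask) demands that the given bits be absent from
   * the selected algorithm: the bits are in the mask but cleared in type. */
  static inline uint32_t requires_off(uint32_t type, uint32_t mask, uint32_t off)
  {
          return (type ^ off) & mask & off;
  }

  /* The sync check is then the same test applied to the ASYNC bit. */
  static inline uint32_t requires_sync(uint32_t type, uint32_t mask)
  {
          return requires_off(type, mask, ALG_ASYNC);
  }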
Ard Biesheuvel db91af0fbe crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

For crypto_inc(), this simply involves making the 4-byte stride
conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
it typically operates on 16 byte buffers.

For crypto_xor(), an algorithm is implemented that simply runs through
the input using the largest strides possible if unaligned accesses are
allowed. If they are not, an optimal sequence of memory accesses is
emitted that takes the relative alignment of the input buffers into
account, e.g., if the relative misalignment of dst and src is 4 bytes,
the entire xor operation will be completed using 4 byte loads and stores
(modulo unaligned bits at the start and end). Note that all expressions
involving misalign are simply eliminated by the compiler when
HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2017-02-11 17:52:28 +08:00
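
A rough illustration of the "largest stride possible" idea, not the kernel
implementation: word-wide XOR when both buffers share word alignment,
byte-wide XOR otherwise.

  #include <stddef.h>
  #include <stdint.h>

  static void xor_any_alignment(uint8_t *dst, const uint8_t *src, size_t len)
  {
          /* Use full machine words while both pointers are word-aligned. */
          if ((((uintptr_t)dst | (uintptr_t)src) & (sizeof(unsigned long) - 1)) == 0) {
                  unsigned long *d = (unsigned long *)dst;
                  const unsigned long *s = (const unsigned long *)src;

                  while (len >= sizeof(unsigned long)) {
                          *d++ ^= *s++;
                          len -= sizeof(unsigned long);
                  }
                  dst = (uint8_t *)d;
                  src = (const uint8_t *)s;
          }
          /* Finish (or handle misaligned buffers) one byte at a time. */
          while (len--)
                  *dst++ ^= *src++;
  }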
Corentin LABBE 2589ad8404 crypto: engine - move crypto engine to its own header
This patch moves the whole crypto engine API to its own header
crypto/engine.h.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-09-07 21:08:26 +08:00
Herbert Xu 4140139734 crypto: api - Optimise away crypto_yield when hard preemption is on
When hard preemption is enabled there is no need to explicitly
call crypto_yield.  This patch eliminates it if that is the case.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18 17:35:49 +08:00
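
Illustration only, with stand-in names for the preemption config and the
reschedule hook, showing how the explicit yield can compile away entirely.

  #include <stdbool.h>

  #define HARD_PREEMPTION 0          /* stand-in for the kernel config */

  static void resched_point(void) { /* voluntary reschedule point */ }

  static inline void yield_if_allowed(bool may_sleep)
  {
  #if !HARD_PREEMPTION
          if (may_sleep)             /* request flagged as may-sleep */
                  resched_point();
  #endif
  }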
Herbert Xu 32f27c745c crypto: api - Add crypto_inst_setname
This patch adds the helper crypto_inst_setname because the current
helper crypto_alloc_instance2 is no longer useful given that we
now look up the algorithm after we allocate the instance object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01 23:45:11 +08:00
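
A sketch of the naming step such a helper performs, assuming the usual
"template(child)" composition; the constant and parameter names below are
stand-ins for the kernel's.

  #include <stdio.h>

  #define MAX_ALG_NAME 128  /* stand-in for CRYPTO_MAX_ALG_NAME */

  /* Compose "template(child)" into the instance name, failing when the
   * result would not fit (the kernel would return -ENAMETOOLONG). */
  static int inst_setname(char *cra_name, const char *tmpl, const char *child)
  {
          if (snprintf(cra_name, MAX_ALG_NAME, "%s(%s)", tmpl, child) >= MAX_ALG_NAME)
                  return -1;
          return 0;
  }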
Herbert Xu 8965450987 crypto: hash - Remove crypto_hash interface
This patch removes all traces of the crypto_hash interface, now
that everyone has switched over to shash or ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-02-06 15:33:20 +08:00
Baolin Wang 735d37b542 crypto: engine - Introduce the block request crypto engine framework
Currently, block cipher engines need to implement and maintain their own
queue/thread for processing requests.  Moreover, helpers are provided only for
the queue itself (crypto_enqueue_request() and crypto_dequeue_request()); they
don't help with the mechanics of driving the hardware (things like running the
request immediately, DMA-mapping it, or providing a thread to process the
queue), even though a lot of that code really shouldn't vary much from device
to device.

This patch therefore provides a mechanism that drivers can use to push
requests to the hardware as it becomes free.  The framework is patterned on
the SPI code, where it has worked out well.
(https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/
 drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0)

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-02-01 22:27:02 +08:00
Baolin Wang 9f93a8a0ba crypto: api - Introduce crypto_queue_len() helper function
This patch introduces the crypto_queue_len() helper function to get the
current length of a crypto queue.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-02-01 22:26:59 +08:00
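
A minimal sketch of the accessor this adds, assuming the queue keeps a running
element count; the structure layout shown is illustrative only.

  struct req_queue {
          unsigned int qlen;      /* current number of queued requests */
          unsigned int max_qlen;  /* backlog threshold */
  };

  static inline unsigned int queue_len(const struct req_queue *queue)
  {
          return queue->qlen;
  }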
Herbert Xu 319382a697 crypto: api - Add instance free function to crypto_type
Currently the task of freeing an instance is given to the crypto
template.  However, it has no type information on the instance so
we have to resort to checking type information at runtime.

This patch introduces a free function to crypto_type that will be
used to free an instance.  This can then be used to free an instance
in a type-safe manner.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-07-14 14:56:45 +08:00
Herbert Xu 31d228cc64 crypto: api - Remove unused __crypto_dequeue_request
The function __crypto_dequeue_request is completely unused.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-07-14 14:56:44 +08:00
Herbert Xu 5d1d65f8be crypto: aead - Convert top level interface to new style
This patch converts the top-level aead interface to the new style.
All user-level AEAD interface code has been moved into crypto/aead.h.

The allocation/free functions have switched over to the new way of
allocating tfms.

This patch also removes the double indirection on setkey so the
indirection now exists only at the alg level.

Apart from these there are no user-visible changes.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-05-13 10:31:53 +08:00
Herbert Xu d6ef2f198d crypto: api - Add crypto_grab_spawn primitive
This patch adds a new primitive crypto_grab_spawn which is meant
to replace crypto_init_spawn and crypto_init_spawn2.  Under the
new scheme the user no longer has to worry about reference counting
the alg object before it is subsumed by the spawn.

It is pretty much an exact copy of crypto_grab_aead.

Prior to calling this function spawn->frontend and spawn->inst
must have been set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-05-13 10:31:25 +08:00
Herbert Xu 87b1675634 crypto: api - Change crypto_unregister_instance argument type
This patch makes crypto_unregister_instance take a crypto_instance
instead of a crypto_alg.  This allows us to remove a duplicate
CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-04-03 17:53:32 +08:00
Marek Vasut bb55a4c100 crypto: api - Move crypto_yield() to algapi.h
It makes no sense for crypto_yield() to be defined in scatterwalk.h;
move it into algapi.h, as it's a function internal to the crypto API.

Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20 21:26:04 +08:00
Ard Biesheuvel 4f7f1d7cff crypto: allow blkcipher walks over AEAD data
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 20:17:11 +08:00
Ard Biesheuvel 822be00fe6 crypto: remove direct blkcipher_walk dependency on transform
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-10 20:17:10 +08:00
James Yonan 6bf37e5aa9 crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
e.g. when communication happens over a network.

Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g.,
on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
and up to ~850 cyc for a full match (cold). This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.

This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). In other words, best and worst case
behaviour take the same amount of time to complete (in contrast to
memcmp).

Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.

We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.

crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.

Using #pragma or __attribute__ annotations to disable optimization was
avoided, as that approach seems to have been considered broken or
unmaintained in GCC for a long time [1]. Therefore, we work around it by
specifying the compile flag for memneq.o directly in the Makefile, which we
found to be the most appropriate approach.

As we use ("Os"), this patch also provides a loop-free "fast path" for
frequently used 16-byte digests. As with the kernel library string
functions, an option is left open for even further optimized,
architecture-specific assembler implementations in the future.

This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].

  [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
  [2] https://lkml.org/lkml/2013/2/10/131

Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-10-07 14:17:06 +08:00
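
A simplified, self-contained sketch of the constant-time comparison idea
(without the 16-byte fast path): XOR every byte pair and OR the results
together so the loop never exits early.

  #include <stddef.h>
  #include <stdint.h>

  /* Returns zero iff the regions are equal; says nothing about ordering. */
  static unsigned long memneq_sketch(const void *a, const void *b, size_t size)
  {
          const uint8_t *pa = a, *pb = b;
          unsigned long neq = 0;

          while (size--)
                  neq |= *pa++ ^ *pb++;   /* accumulate differences, no early return */

          return neq;
  }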
Steffen Klassert ce3fd840f5 crypto: Unlink and free instances when deleted
We leak the crypto instance when we unregister an instance with
crypto_del_alg(). Therefore we introduce crypto_unregister_instance()
to unlink the crypto instance from the template's instances list and
to free the resources of the instance properly.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-11-09 12:04:06 +08:00
Steffen Klassert b6aa63c09b crypto: Add a report function pointer to crypto_type
We add a report function pointer to struct crypto_type. This function
pointer is used from the crypto userspace configuration API to report
crypto algorithms to userspace.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2011-10-21 14:24:03 +02:00
David S. Miller bf06099db1 crypto: skcipher - Add ablkcipher_walk interfaces
These are akin to the blkcipher_walk helpers.

The main differences in the async variant are:

1) Only physical walking is supported.  We can't hold on to
   kmap mappings across the async operation to support virtual
   ablkcipher_walk operations anyway.

2) Bounce buffers used for async mode need to be persistent and
   freed at a later point in time when the async op completes.
   Therefore we maintain a list of writeback buffers and require
   that the ablkcipher_walk user call the 'complete' operation
   so we can copy the bounce buffers out to the real buffers and
   free up the bounce buffer chunks.

These interfaces will be used by the new Niagara2 crypto driver.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2010-05-19 14:13:07 +10:00
Benjamin Gilbert 2141b6309b crypto: hash - Remove legacy hash/digest code
6941c3a0 disabled compilation of the legacy digest code but didn't
actually remove it.  Rectify this.  Also, remove the crypto_hash_type
extern declaration from algapi.h now that the struct is gone.

Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-10-19 12:53:37 +09:00
Linus Torvalds 332a339218 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
  crypto: sha-s390 - Fix warnings in import function
  crypto: vmac - New hash algorithm for intel_txt support
  crypto: api - Do not displace newly registered algorithms
  crypto: ansi_cprng - Fix module initialization
  crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
  crypto: fips - Depend on ansi_cprng
  crypto: blkcipher - Do not use eseqiv on stream ciphers
  crypto: ctr - Use chainiv on raw counter mode
  Revert crypto: fips - Select CPRNG
  crypto: rng - Fix typo
  crypto: talitos - add support for 36 bit addressing
  crypto: talitos - align locks on cache lines
  crypto: talitos - simplify hmac data size calculation
  crypto: mv_cesa - Add support for Orion5X crypto engine
  crypto: cryptd - Add support to access underlaying shash
  crypto: gcm - Use GHASH digest algorithm
  crypto: ghash - Add GHASH digest algorithm for GCM
  crypto: authenc - Convert to ahash
  crypto: api - Fix aligned ctx helper
  crypto: hmac - Prehash ipad/opad
  ...
2009-09-11 09:38:37 -07:00
Herbert Xu 0c7d400faf crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test
As struct skcipher_givcrypt_request includes struct crypto_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work.  This can result
in IPsec crashes when the queue is depleted.

This patch fixes it by doing the pointer conversion only when the
return value is non-NULL.  In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.

Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-08-29 20:44:04 +10:00
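
A self-contained sketch of the bug class and the fix: with the embedded request
at a non-zero offset, the container_of-style conversion must happen only after
the NULL check. Structure names here are illustrative.

  #include <stddef.h>

  struct async_request { int flags; };

  struct givcrypt_request {
          void *giv;                     /* pushes creq to a non-zero offset */
          struct async_request creq;
  };

  #define outer_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

  static struct givcrypt_request *dequeue_givcrypt(struct async_request *req)
  {
          /* Convert only when the inner pointer is non-NULL; converting a
           * NULL pointer first would yield a bogus non-NULL outer pointer. */
          return req ? outer_of(req, struct givcrypt_request, creq) : NULL;
  }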
Herbert Xu ab30046567 crypto: api - Fix aligned ctx helper
The aligned ctx helper was using a bogus alignment value that was
one off the correct value.  Fortunately the current users do not
require anything beyond the natural alignment of the platform so
this hasn't caused a problem.

This patch fixes that and also removes the unnecessary minimum
check since if the alignment is less than the natural alignment
then the subsequent ALIGN operation should be a noop.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-24 15:26:15 +08:00
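
A sketch of the off-by-one being fixed, assuming the usual convention that the
crypto API stores an alignment mask (alignment minus one), so the align step
needs mask + 1 rather than the mask itself.

  #include <stdint.h>

  #define ALIGN_UP(x, a)  (((x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

  static void *ctx_aligned(void *ctx, unsigned int alignmask)
  {
          /* alignmask is e.g. 7 for 8-byte alignment; ALIGN_UP() wants 8. */
          return (void *)ALIGN_UP((uintptr_t)ctx, (uintptr_t)alignmask + 1);
  }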
Herbert Xu 9cd899a32f crypto: cryptd - Switch to template create API
This patch changes cryptd to use the template->create function
instead of alloc in anticipation for the switch to new style
ahash algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 18:45:45 +08:00
Herbert Xu 2ca33da1de crypto: api - Remove frontend argument from extsize/init_tfm
As the extsize and init_tfm functions belong to the frontend the
frontend argument is superfluous.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-14 12:58:15 +08:00
Herbert Xu d06854f024 crypto: api - Add crypto_attr_alg2 helper
This patch adds the helper crypto_attr_alg2 which is similar to
crypto_attr_alg but takes an extra frontend argument.  This is
intended to be used by new style algorithm types such as shash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:58:29 +08:00
Herbert Xu 97eedce1a6 crypto: api - Add new style spawn support
This patch modifies the spawn infrastructure to support new style
algorithms like shash.  In particular, this means storing the
frontend type in the spawn and using crypto_create_tfm to allocate
the tfm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-08 18:57:10 +08:00
Herbert Xu 70ec7bb91a crypto: api - Add crypto_alloc_instance2
This patch adds a new argument to crypto_alloc_instance which
sets aside some space before the instance for use by algorithms
such as shash that place type-specific data before crypto_alg.

For compatibility the function has been renamed so that existing
users aren't affected.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07 18:55:28 +08:00
Herbert Xu f2ac72e826 crypto: api - Add new template create function
This patch introduces the template->create function intended
to replace the existing alloc function.  The intention is for
create to handle the registration directly, whereas currently
the caller of alloc has to handle the registration.

This allows type-specific code to be run prior to registration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2009-07-07 12:30:33 +08:00
Herbert Xu 5f7082ed4f crypto: hash - Export shash through hash
This patch allows shash algorithms to be used through the old hash
interface.  This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-12-25 11:01:33 +11:00
Herbert Xu 7b0bac64cd crypto: api - Rebirth of crypto_alloc_tfm
This patch reintroduces a completely revamped crypto_alloc_tfm.
The biggest change is that we now take two crypto_type objects
when allocating a tfm, a frontend and a backend.  In fact this
simply formalises what we've been doing behind the API's back.

For example, as it stands crypto_alloc_ahash may use an
actual ahash algorithm or a crypto_hash algorithm.  Putting
this in the API allows us to do this much more cleanly.

The existing types will be converted across gradually.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-12-25 11:01:24 +11:00
Herbert Xu 4a7794860b crypto: api - Move type exit function into crypto_tfm
The type exit function needs to undo any allocations done by the type
init function.  However, the type init function may differ depending
on the upper-level type of the transform (e.g., a crypto_blkcipher
instantiated as a crypto_ablkcipher).

So we need to move the exit function out of the lower-level
structure and into crypto_tfm itself.

As it stands this is a no-op since nobody uses exit functions at
all.  However, all cases where a lower-level type is instantiated
as a different upper-level type (such as blkcipher as ablkcipher)
will be converted such that they allocate the underlying transform
and use that instead of casting (e.g., crypto_ablkcipher cast
into crypto_blkcipher).  That will need to use a different exit
function depending on the upper-level type.

This patch also allows the type init/exit functions to call (or not)
cra_init/cra_exit instead of always calling them from the top level.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-12-25 11:01:23 +11:00
Herbert Xu 45d44eb56a [CRYPTO] skcipher: Remove crypto_spawn_ablkcipher
Now that gcm and authenc have been converted to crypto_spawn_skcipher,
this patch removes the obsolete crypto_spawn_ablkcipher function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:48 +11:00
Herbert Xu 378f4f51f9 [CRYPTO] skcipher: Add crypto_grab_skcipher interface
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher.  The name blkcipher has
always been much of a misnomer since it supports stream ciphers too.

This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn.  The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.

This meant that each spawn user had to be aware of what type and mask to
use for these two steps.  This is difficult and also presents a problem
when the type/mask changes, as they're about to for IV generators.

The new interface does both steps together just like crypto_alloc_ablkcipher.

As a side-effect this also allows us to be stronger on type enforcement
for spawns.  For now this is only done for ablkcipher but it's trivial
to extend for other types.

This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.

Finally this patch introduces the function crypto_requires_sync to determine
whether the user is specifically requesting a sync algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:42 +11:00
Herbert Xu 68b6c7d691 [CRYPTO] api: Add crypto_attr_alg_name
This patch adds a new helper crypto_attr_alg_name which is basically the
first half of crypto_attr_alg.  That is, it returns an algorithm name
parameter as a string without looking it up.  The caller can then look it
up immediately or defer it until later.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:40 +11:00
Herbert Xu 7613636def [CRYPTO] api: Add crypto_inc and crypto_xor
With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions.  This patch creates a generic pair of functions,
crypto_inc and crypto_xor, which do big-endian increment and exclusive or,
respectively.

For optimum performance, they both use u32 operations so alignment must be
that of u32, even though the arguments are of type u8 *.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:17 +11:00
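
A byte-at-a-time sketch of the big-endian increment crypto_inc performs
(without the u32 optimisation mentioned above): start at the last byte and
propagate the carry toward the front.

  #include <stddef.h>
  #include <stdint.h>

  static void inc_be(uint8_t *counter, size_t size)
  {
          while (size--) {
                  if (++counter[size] != 0)
                          break;          /* no carry left to propagate */
          }
  }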
Herbert Xu 332f8840f7 [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type
Up until now, ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set.  This is suboptimal because
ablkcipher refers to two things.  On the one hand it refers to the
top-level ablkcipher interface with requests.  On the other hand it
refers to an algorithm type underneath.

As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top.  This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.

This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST.  The type it associated
with the algorithm implementation only.

Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used.  If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
Herbert Xu 7607bd8ff0 [CRYPTO] blkcipher: Added blkcipher_walk_virt_block
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher.  This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
Herbert Xu 3c09f17c3d [CRYPTO] aead: Add authenc
This patch adds the authenc algorithm which constructs an AEAD algorithm
from an asynchronous block cipher and a hash.  The construction is done
by concatenating the encrypted result from the cipher with the output
from the hash, as is used by the IPsec ESP protocol.

The authenc algorithm exists as a template with four parameters:

	authenc(auth, authsize, enc, enckeylen).

The authentication algorithm, the authentication size (i.e., truncating
the output of the authentication algorithm), the encryption algorithm,
and the encryption key length.  Both the size field and the key length
field are in bytes.  For example, AES-128 with SHA1-HMAC would be
represented by

	authenc(hmac(sha1), 12, cbc(aes), 16)

The key for the authenc algorithm is the concatenation of the keys for
the authentication algorithm with the encryption algorithm.  For the
above example, if a key of length 36 bytes is given, then hmac(sha1)
would receive the first 20 bytes while the last 16 would be given to
cbc(aes).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:43 -07:00
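
A sketch of the key split described above for authenc(hmac(sha1), 12, cbc(aes), 16);
the helper and struct below are hypothetical, not the kernel's setkey.

  #include <stddef.h>

  struct authenc_key_parts {
          const unsigned char *authkey;   /* e.g. first 20 bytes -> hmac(sha1) */
          size_t authkeylen;
          const unsigned char *enckey;    /* e.g. last 16 bytes -> cbc(aes)    */
          size_t enckeylen;
  };

  static int split_authenc_key(const unsigned char *key, size_t keylen,
                               size_t enckeylen, struct authenc_key_parts *out)
  {
          if (keylen < enckeylen)
                  return -1;

          out->authkey    = key;
          out->authkeylen = keylen - enckeylen;   /* 36 - 16 = 20 in the example */
          out->enckey     = key + out->authkeylen;
          out->enckeylen  = enckeylen;
          return 0;
  }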
Herbert Xu 2de98e7544 [CRYPTO] ablkcipher: Remove queue pointer from common alg object
Since not everyone needs a queue pointer and those who need it can
always get it from the context anyway the queue pointer in the
common alg object is redundant.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:41 -07:00
Herbert Xu 1ae978208e [CRYPTO] api: Add aead crypto type
This patch adds crypto_aead which is the interface for AEAD
(Authenticated Encryption with Associated Data) algorithms.

AEAD algorithms perform authentication and encryption in one
step.  Traditionally users (such as IPsec) would use two
different crypto algorithms to perform these.  With AEAD
this comes down to one algorithm and one operation.

Of course if traditional algorithms were used we'd still
be doing two operations underneath.  However, real AEAD
algorithms may allow the underlying operations to be
optimised as well.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:39 -07:00
Sebastian Siewior aa379a6ab1 [CRYPTO] api: Add crypto_ablkcipher_ctx_aligned
This function does the same thing for ablkcipher that is done for
blkcipher by crypto_blkcipher_ctx_aligned(): it returns an aligned
address of the private ctx.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:37 -07:00
Herbert Xu 124b53d020 [CRYPTO] cryptd: Add software async crypto daemon
This patch adds the cryptd module which is a template that takes a
synchronous software crypto algorithm and converts it to an asynchronous
one by executing it in a kernel thread.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu a73e69965f [CRYPTO] api: Do not remove users unless new algorithm matches
As it is, whenever a new algorithm with the same name is registered,
users of the old algorithm will be removed so that they can take
advantage of the new algorithm.  This presents a problem when the
new algorithm is not equivalent to the old algorithm.  In particular,
the new algorithm might only function on top of the existing one.

Hence we should not remove users unless they can make use of the
new algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu b5b7f08869 [CRYPTO] api: Add async blkcipher type
This patch adds the mid-level interface for asynchronous block ciphers.
It also includes a generic queueing mechanism that can be used by other
asynchronous crypto operations in future.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
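
A minimal, self-contained sketch of the generic queueing idea: a FIFO with a
length cap whose enqueue reports "busy" when full. The real crypto_queue adds
list_head plumbing and backlog handling on top of this.

  #include <stddef.h>

  struct req { struct req *next; };

  struct req_queue {
          struct req *head, *tail;
          unsigned int qlen, max_qlen;
  };

  static int enqueue(struct req_queue *q, struct req *r)
  {
          if (q->qlen >= q->max_qlen)
                  return -1;              /* caller must back off and retry */
          r->next = NULL;
          if (q->tail)
                  q->tail->next = r;
          else
                  q->head = r;
          q->tail = r;
          q->qlen++;
          return 0;
  }

  static struct req *dequeue(struct req_queue *q)
  {
          struct req *r = q->head;

          if (!r)
                  return NULL;
          q->head = r->next;
          if (!q->head)
                  q->tail = NULL;
          q->qlen--;
          return r;
  }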
Herbert Xu ebc610e5bc [CRYPTO] templates: Pass type/mask when creating instances
This patch passes the type/mask along when constructing instances of
templates.  This is in preparation for templates that may support
multiple types of instances depending on what is requested.  For example,
the planned software async crypto driver will use this construct.

For the moment this allows us to check whether the instance constructed
is of the correct type and avoid returning success if the type does not
match.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00