Commit Graph

1754 Commits

Author SHA1 Message Date
Herbert Xu 378f4f51f9 [CRYPTO] skcipher: Add crypto_grab_skcipher interface
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher.  The name blkcipher has
always been much of a misnomer since it supports stream ciphers too.

This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn.  The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.

This meant that each spawn user had to be aware of what type and mask to
use for these two steps.  This is difficult and also presents a problem
when the type/mask changes as they're about to be for IV generators.

The new interface does both steps together just like crypto_alloc_ablkcipher.

As a side-effect this also allows us to be stronger on type enforcement
for spawns.  For now this is only done for ablkcipher but it's trivial
to extend for other types.

This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.

Finally this patch introduces the function crypto_require_sync to determine
whether the user is specifically requesting a sync algorithm.
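
A rough sketch of how a spawn user might call the new interface (header locations, struct and helper names here are assumptions drawn from the description above, not taken from the patch itself):

#include <crypto/algapi.h>
#include <crypto/internal/skcipher.h>	/* assumed home of the spawn helpers */

/* Hypothetical instance context holding an skcipher spawn. */
struct example_instance_ctx {
	struct crypto_skcipher_spawn spawn;
};

static int example_grab(struct example_instance_ctx *ctx,
			struct crypto_instance *inst,
			const char *cipher_name, u32 type, u32 mask)
{
	/* One call replaces the old two-step lookup + crypto_init_spawn;
	 * the skcipher type/mask adjustments happen inside the helper. */
	crypto_set_skcipher_spawn(&ctx->spawn, inst);
	return crypto_grab_skcipher(&ctx->spawn, cipher_name, type, mask);
}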

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:42 +11:00
Herbert Xu 84c9115230 [CRYPTO] gcm: Add support for async ciphers
This patch adds the necessary changes for GCM to be used with async
ciphers.  This would allow it to be used with hardware devices that
support CTR.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:42 +11:00
Herbert Xu 5311f248b7 [CRYPTO] ctr: Refactor into ctr and rfc3686
As discussed previously, this patch moves the basic CTR functionality
into a chainable algorithm called ctr.  The IPsec-specific variant of
it is now placed on top with the name rfc3686.

So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
variant will be called rfc3686(ctr(aes)).  This patch also adjusts
gcm accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:41 +11:00
Herbert Xu 653ebd9c85 [CRYPTO] blkcipher: Merge ablkcipher and blkcipher into one option/module
With the impending addition of the givcipher type, both blkcipher and
ablkcipher algorithms will use it to create givcipher objects.  As such
it no longer makes sense to split the system between ablkcipher and
blkcipher.  In particular, both ablkcipher.c and blkcipher.c would need
to use the givcipher type which has to reside in ablkcipher.c since it
shares much code with it.

This patch merges the two Kconfig options as well as the modules into one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:41 +11:00
Herbert Xu 2589469d7b [CRYPTO] gcm: Fix request context alignment
This patch fixes the request context alignment so that it is actually
aligned to the value required by the algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:40 +11:00
Herbert Xu 68b6c7d691 [CRYPTO] api: Add crypto_attr_alg_name
This patch adds a new helper crypto_attr_alg_name which is basically the
first half of crypto_attr_alg.  That is, it returns an algorithm name
parameter as a string without looking it up.  The caller can then look it
up immediately or defer it until later.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:40 +11:00
Borislav Petkov 5e553110f2 [CRYPTO] authenc: Select HASH in Kconfig
i get here:

----
  LD      vmlinux
  SYSMAP  System.map
  SYSMAP  .tmp_System.map
  Building modules, stage 2.
  MODPOST 226 modules
ERROR: "crypto_hash_type" [crypto/authenc.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
---

which fails because crypto_hash_type is declared in crypto/hash.c. You might wanna
fix it like so:

Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:39 +11:00
Herbert Xu 7c3d703fa8 [CRYPTO] authenc: Merge common hashing code
This patch merges the common hashing code between encryption and decryption.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:38 +11:00
Herbert Xu 12dc5e62b4 [CRYPTO] authenc: Use RTA_OK to check length
This patch changes setkey to use RTA_OK to check the validity of the
setkey request.
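
Roughly what such a length check amounts to (a sketch, not the patch itself):

#include <linux/rtnetlink.h>
#include <linux/errno.h>

/* Sketch: reject a key blob whose leading rtattr header does not fit keylen. */
static int example_check_key(u8 *key, unsigned int keylen)
{
	struct rtattr *rta = (struct rtattr *)key;

	if (!RTA_OK(rta, keylen))
		return -EINVAL;
	return 0;
}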

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:38 +11:00
Herbert Xu c2c61f513d [CRYPTO] authenc: Fix typo in ivsize
The ivsize should be fetched from ablkcipher, not blkcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:37 +11:00
Tan Swee Heng 5de8f1b562 [CRYPTO] tcrypt: Added salsa20 speed test
This patch adds a simple speed test for salsa20.
Usage: modprobe tcrypt mode=206

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:36 +11:00
Zoltan Sogor 0b77abb3b2 [CRYPTO] lzo: Add LZO compression algorithm support
Add LZO compression algorithm support

Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:35 +11:00
Zoltan Sogor 91755a921c [CRYPTO] tcrypt: Add common compression tester function
Add common compression tester function
Modify deflate test case to use the common compressor test function

Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Tan Swee Heng 8bff664cdf [CRYPTO] tcrypt: Salsa20 large test vector
This is a large test vector for Salsa20 that crosses the 4096-bytes
page boundary.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Tan Swee Heng eb6f13eb9f [CRYPTO] salsa20_generic: Fix multi-page processing
This patch fixes the multi-page processing bug that affects large test
vectors (the same bug that previously affected ctr.c).

There is an optimization for the case walk.nbytes == nbytes. Also we
now use crypto_xor() instead of adhoc XOR routines.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Herbert Xu 7f6813786a [CRYPTO] gcm: Put abreq in private context instead of on stack
The abreq structure is currently allocated on the stack.  This is broken
if the underlying algorithm is asynchronous.  This patch changes it so
that it's taken from the private context instead which has been enlarged
accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:33 +11:00
Herbert Xu b2ab4a57b0 [CRYPTO] scatterwalk: Restore custom sg chaining for now
Unfortunately the generic chaining hasn't been ported to all architectures
yet, and notably not s390.  So this patch restores the chaining that we've
been using previously which does work everywhere.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:33 +11:00
Herbert Xu 42c271c6c5 [CRYPTO] scatterwalk: Move scatterwalk.h to linux/crypto
The scatterwalk infrastructure is used by algorithms so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:32 +11:00
Herbert Xu fe70f5dfe1 [CRYPTO] aead: Return EBADMSG for ICV mismatch
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for
ICV mismatches.  This convention has already been adopted by IPsec.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:32 +11:00
Herbert Xu 6160b28992 [CRYPTO] gcm: Fix ICV handling
The crypto_aead convention for ICVs is to include it directly in the
output.  If we decided to change this in future then we would make
the ICV (if the algorithm has an explicit one) available in the
request itself.

For now no algorithm needs this so this patch changes gcm to conform
to this convention.  It also adjusts the tcrypt aead tests to take
this into account.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:31 +11:00
Herbert Xu 8df213d9b5 [CRYPTO] tcrypt: Make gcm available as a standalone test
Currently the gcm(aes) tests have to be taken together with all other
ciphers.  This patch makes it available by itself at number 35.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
Herbert Xu 481f34ae75 [CRYPTO] authenc: Fix hash verification
The previous code incorrectly included the hash in the verification which
also meant that we'd crash and burn when it comes to actually verifying
the hash since we'd go past the end of the SG list.

This patch fixes that by subtracting authsize from cryptlen at the start.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
Herbert Xu e236d4a89a [CRYPTO] authenc: Move enckeylen into key itself
Having enckeylen as a template parameter makes it a pain for hardware
devices that implement ciphers with many key sizes since each one would
have to be registered separately.

Since the authenc algorithm is mainly used for legacy purposes where its
key is going to be constructed out of two separate keys, we can in fact
embed this value into the key itself.

This patch does this by prepending an rtnetlink header to the key that
contains the encryption key length.
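
Illustratively, the key blob then looks like an rtattr header carrying a big-endian enckeylen, followed by the raw authentication and encryption keys (struct and attribute names below are placeholders, not necessarily those used by the patch):

#include <linux/kernel.h>
#include <linux/rtnetlink.h>
#include <linux/string.h>
#include <asm/byteorder.h>

struct example_authenc_param {		/* placeholder name */
	__be32 enckeylen;
};

/* Build [rtattr|enckeylen][authkey][enckey] into buf; buf must be big enough. */
static unsigned int example_build_key(u8 *buf,
				      const u8 *authkey, unsigned int authkeylen,
				      const u8 *enckey, unsigned int enckeylen)
{
	struct rtattr *rta = (struct rtattr *)buf;
	struct example_authenc_param *param = RTA_DATA(rta);
	unsigned int off = RTA_SPACE(sizeof(*param));

	rta->rta_type = 1;			/* placeholder attribute type */
	rta->rta_len = RTA_LENGTH(sizeof(*param));
	param->enckeylen = cpu_to_be32(enckeylen);

	memcpy(buf + off, authkey, authkeylen);
	memcpy(buf + off + authkeylen, enckey, enckeylen);
	return off + authkeylen + enckeylen;
}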

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
Herbert Xu 7ba683a6de [CRYPTO] aead: Make authsize a run-time parameter
As it is, authsize is an algorithm parameter which cannot be changed at
run-time.  This is inconvenient because hardware that implements such
algorithms would have to register each authsize that they support
separately.

Since authsize is a property common to all AEAD algorithms, we can add
a function setauthsize that sets it at run-time, just like setkey.

This patch does exactly that and also changes authenc so that authsize
is no longer a parameter of its template.
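
For instance, a user might now pick the ICV length at run time (a minimal sketch, error handling trimmed):

#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/errno.h>

/* Sketch: one template instance, tag length chosen at run time. */
static struct crypto_aead *example_alloc_authenc(void)
{
	struct crypto_aead *tfm;

	tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);
	if (IS_ERR(tfm))
		return tfm;

	/* 12-byte ICV, the usual HMAC-SHA1 truncation for IPsec. */
	if (crypto_aead_setauthsize(tfm, 12)) {
		crypto_free_aead(tfm);
		return ERR_PTR(-EINVAL);
	}
	return tfm;
}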

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:29 +11:00
Herbert Xu e29bc6ad0e [CRYPTO] authenc: Use or instead of max on alignment masks
Since alignment masks are always one less than a power of two, we can
use binary or to find their maximum.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:28 +11:00
Denis Cheng a10e11946b [CRYPTO] tcrypt: Use print_hex_dump from linux/kernel.h
These utilities are implemented in lib/hexdump.c and are handier; use them instead.
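
For example (a sketch of the helper's use, not the actual tcrypt hunk):

#include <linux/kernel.h>

/* Dump a buffer as hex + ASCII, 16 bytes per row, with offsets. */
static void example_dump(const void *buf, size_t len)
{
	print_hex_dump(KERN_DEBUG, "tcrypt: ", DUMP_PREFIX_OFFSET,
		       16, 1, buf, len, true);
}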

Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:27 +11:00
Jan Glauber 9617d6ef62 [CRYPTO] tcrypt: AES CBC test vectors from NIST SP800-38A
Add test vectors to tcrypt for AES in CBC mode for key sizes 192 and 256.
The test vectors are copied from NIST SP800-38A.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:26 +11:00
Tan Swee Heng a773edb3ed [CRYPTO] tcrypt: AES CTR large test vector
This patch adds a large AES CTR mode test vector. The test vector is
4100 bytes in size. It was generated using a C++ program that called
Crypto++.

Note that this patch increases considerably the size of "struct
cipher_testvec" and hence the size of tcrypt.ko.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:25 +11:00
Tan Swee Heng 6d1a69d53a [CRYPTO] tcrypt: Support for large test vectors
Currently the number of entries in a cipher test vector template is
limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch
circumvents the problem by pointing cipher_tv to each entry in the
template, rather than the template itself.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:25 +11:00
Herbert Xu 0971eb0de9 [CRYPTO] ctr: Fix multi-page processing
When the data spans across a page boundary, CTR may incorrectly process
a partial block in the middle because the blkcipher walking code may
supply partial blocks in the middle as long as the total length of the
supplied data is more than a block.  CTR is supposed to return any unused
partial block in that case to the walker.

This patch fixes this by doing exactly that, returning partial blocks to
the walker unless we received less than a block-worth of data to start
with.

This also allows us to optimise the bulk of the processing since we no
longer have to worry about partial blocks until the very end.

Thanks to Tan Swee Heng for fixes and actually testing this :)

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:24 +11:00
Mikko Herranen 28db8e3e38 [CRYPTO] gcm: New algorithm
Add GCM/GMAC support to cryptoapi.

GCM (Galois/Counter Mode) is an AEAD mode of operation for block ciphers
with a block size of 16.  The typical example is AES-GCM.
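
A hedged sketch of driving gcm(aes) through the AEAD interface of this era (helper names reflect the then-current API and should be treated as assumptions; error and -EINPROGRESS handling is simplified):

#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/err.h>

static int example_gcm_encrypt(struct scatterlist *assoc, unsigned int assoclen,
			       struct scatterlist *sg, unsigned int cryptlen,
			       u8 *iv, const u8 *key, unsigned int keylen)
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	int err;

	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, keylen);
	if (!err)
		err = crypto_aead_setauthsize(tfm, 16);	/* full 16-byte tag */

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_aead(tfm);
		return -ENOMEM;
	}

	aead_request_set_crypt(req, sg, sg, cryptlen, iv);	/* in place */
	aead_request_set_assoc(req, assoc, assoclen);
	if (!err)
		err = crypto_aead_encrypt(req);	/* tag appended to the output */

	aead_request_free(req);
	crypto_free_aead(tfm);
	return err;	/* async completion handling omitted for brevity */
}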

Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:23 +11:00
Mikko Herranen e3a4ea4fd2 [CRYPTO] tcrypt: Add aead support
Add AEAD support to tcrypt, needed by GCM.

Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:23 +11:00
Denys Vlasenko ff85a8082f [CRYPTO] camellia: Move more common code into camellia_setup_tail
Analogously to camellia7 patch, move
"absorb kw2 to other subkeys" and "absorb kw4 to other subkeys"
code parts into camellia_setup_tail(). This further reduces
source and object code size at the cost of two branches
in key setup code.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:22 +11:00
Denys Vlasenko dedcf8b064 [CRYPTO] camellia: Move common code into camellia_setup_tail
Move "key XOR is end of F-function" code part into
camellia_setup_tail(), it is sufficiently similar
between camellia_setup128 and camellia_setup256.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:21 +11:00
Denys Vlasenko acca79a664 [CRYPTO] camellia: Merge encrypt/decrypt routines for all key lengths
This unifies the encrypt/decrypt routines for different key lengths.
This reduces module size by ~25%, with tiny (less than 1%)
speed impact.
Also collapses encrypt/decrypt into more readable
(visually shorter) form using macros.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:21 +11:00
Denys Vlasenko 2ddae4a644 [CRYPTO] camellia: Code shrink
Remove unused macro params.
Use (u8)(expr) instead of (expr) & 0xff, which
helps gcc realize that it can use simpler instructions.
Move CAMELLIA_FLS macro closer to encrypt/decrypt routines.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:20 +11:00
Herbert Xu 3f8214ea33 [CRYPTO] ctr: Use crypto_inc and crypto_xor
This patch replaces the custom inc/xor in CTR with the generic functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:20 +11:00
Herbert Xu d0b9007a27 [CRYPTO] pcbc: Use crypto_xor
This patch replaces the custom xor in CBC with the generic crypto_xor.

It changes the operations for in-place encryption slightly to avoid
calling crypto_xor with tmpbuf since it is not necessarily aligned.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:19 +11:00
Herbert Xu 50b6544e13 [CRYPTO] cbc: Require block size to be a power of 2
All common block ciphers have a block size that's a power of 2.  In fact,
all of our block ciphers obey this rule.

If we require this then CBC can be optimised to avoid an expensive divide
on in-place decryption.

I've also changed the saving of the first IV in the in-place decryption
case to the last IV because that lets us use walk->iv (which is already
aligned) for the xor operation where alignment is required.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:19 +11:00
Herbert Xu 3c7f076da5 [CRYPTO] cbc: Use crypto_xor
This patch replaces the custom xor in CBC with the generic crypto_xor.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:18 +11:00
Herbert Xu 7613636def [CRYPTO] api: Add crypto_inc and crypto_xor
With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions.  This patch creates a generic pair of functions,
crypto_inc and crypto_xor, which do big-endian increment and exclusive or,
respectively.

For optimum performance, they both use u32 operations, so the alignment must
be that of u32 even though the arguments are of type u8 *.
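
Their behaviour in byte-wise reference form (illustration only; as noted above, the real helpers work on u32 words for speed):

#include <linux/types.h>

static void ref_crypto_xor(u8 *dst, const u8 *src, unsigned int size)
{
	while (size--)
		*dst++ ^= *src++;
}

static void ref_crypto_inc(u8 *counter, unsigned int size)
{
	/* Big-endian increment: bump the last byte, carry towards byte 0. */
	while (size--) {
		if (++counter[size])
			break;
	}
}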

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:17 +11:00
Tan Swee Heng 2407d60872 [CRYPTO] salsa20: Salsa20 stream cipher
This patch implements the Salsa20 stream cipher using the blkcipher interface.

The core cipher code comes from Daniel Bernstein's submission to eSTREAM:
  http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/ref/

The test vectors comes from:
  http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/

It has been tested successfully with "modprobe tcrypt mode=34" on an
UML instance.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
Herbert Xu 332f8840f7 [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type
Up until now ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set.  This is suboptimal because
ablkcipher refers to two things.  On the one hand it refers to the
top-level ablkcipher interface with requests.  On the other hand it
refers to an algorithm type underneath.

As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top.  This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.

This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST.  The type is associated
with the algorithm implementation only.

Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used.  If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
Herbert Xu 468577abe3 [CRYPTO] scatterwalk: Use generic scatterlist chaining
This patch converts the crypto scatterwalk code to use the generic
scatterlist chaining rather the version specific to crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:14 +11:00
Jonathan Lynch cd12fb906d [CRYPTO] sha256-generic: Extend sha256_generic.c to support SHA-224
Resubmitting this patch which extends sha256_generic.c to support SHA-224 as
described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC4231
is then supported through the hmac interface.

Patch includes test vectors for SHA-224 and HMAC-SHA-224.

SHA-224 should be chosen as a hash algorithm when 112 bits of security
strength are required.

Patch generated against the 2.6.24-rc1 kernel and tested against
2.6.24-rc1-git14 which includes fix for scatter gather implementation for HMAC.

Signed-off-by: Jonathan Lynch <jonathan.lynch@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:12 +11:00
Sebastian Siewior 5157dea813 [CRYPTO] aes-i586: Remove setkey
The setkey() function can be shared with the generic algorithm.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior b345cee90a [CRYPTO] ctr: Remove default M
NO other block mode is M by default.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior 81190b3215 [CRYPTO] aes-x86-64: Remove setkey
The setkey() function can be shared with the generic algorithm.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior 96e82e4551 [CRYPTO] aes-generic: Make key generation exportable
This patch exports four tables and the set_key() routine. These resources
can be shared by other AES implementations (aes-x86_64 for instance).
The decryption key has been turned around (deckey[0] is the first piece
of the key instead of deckey[keylen+20]). The encrypt/decrypt functions
now look identical (except that they use different tables and
keys).

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:09 +11:00
Sebastian Siewior be5fb27012 [CRYPTO] aes-generic: Coding style cleanup
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:09 +11:00
Joy Latten 41fdab3dd3 [CRYPTO] ctr: Add countersize
This patch adds countersize to CTR mode.
The template is now ctr(algo,noncesize,ivsize,countersize).

For example, ctr(aes,4,8,4) indicates the counterblock
will be composed of a salt/nonce that is 4 bytes, an iv
that is 8 bytes and the counter is 4 bytes.

When noncesize + ivsize < blocksize, CTR initializes the
last (blocksize - ivsize - noncesize) bytes of the counter block to
zero.  Otherwise the counter block is composed of the IV
(and nonce if necessary).

If noncesize + ivsize == blocksize, then this indicates that the
user is passing in the entire counterblock. Thus countersize
indicates the number of bytes in the counterblock to use as
the counter for incrementing. CTR will increment the counter
portion by 1, and begin encryption with that value.

Note that CTR assumes the counter portion of the block that
will be incremented is stored in big endian.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:08 +11:00
Denys Vlasenko d3e7480572 [CRYPTO] camellia: De-unrolling
Move the huge unrolled pieces of code (3 screenfuls) at the end of the
128/256 key setup routines into a common camellia_setup_tail() and
convert them to a loop there.
The loop is still unrolled six times, so the performance hit is very small
while the code size win is big.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:08 +11:00
Denys Vlasenko 1ce73e8d6d [CRYPTO] camellia: Code cleanup
Optimize GETU32 to use 4-byte memcpy (modern gcc will convert
such memcpy to single move instruction on i386).
Original GETU32 did four byte fetches, and shifted/XORed those.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:07 +11:00
Denys Vlasenko 3a5e5f8108 [CRYPTO] camellia: Code cleanup
Rename some macros to shorter names: CAMELLIA_RR8 -> ROR8,
making it easier to understand that it is just a right rotation,
nothing camellia-specific in it.
CAMELLIA_SUBKEY_L() -> SUBKEY_L() - just shorter.

Move be32 <-> cpu conversions out of en/decrypt128/256 and into
camellia_en/decrypt - no reason to have that code duplicated.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:07 +11:00
Denys Vlasenko 1721a81256 [CRYPTO] camellia: Code cleanup
Move code blocks around so that related pieces are closer together:
e.g. CAMELLIA_ROUNDSM macro does not need to be separated
from the rest of the code by huge array of constants.

Remove unused macros (COPY4WORD, SWAP4WORD, XOR4WORD[2])

Drop SUBL(), SUBR() macros which only obscure things.
Same for CAMELLIA_SP1110() macro and KEY_TABLE_TYPE typedef.

Remove useless comments:
/* encryption */ -- well it's obvious enough already!
void camellia_encrypt128(...)

Combine swap with copying at the beginning/end of encrypt/decrypt.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:06 +11:00
Denys Vlasenko e2b21b5002 [CRYPTO] twofish: Do not unroll big stuff in twofish key setup
Currently twofish cipher key setup code
has unrolled loops - approximately 70-100
instructions are repeated 40 times.

As a result, twofish module is the biggest module
in crypto/*.

Unrolling produces x2.5 more code (+18k on i386), and speeds up key
setup by 7%:

	unrolled: twofish_setkey/sec: 41128
	    loop: twofish_setkey/sec: 38148
	CALC_K256: ~100 insns each
	CALC_K192: ~90 insns
	   CALC_K: ~70 insns

Attached patch removes this unrolling.

$ size */twofish_common.o
   text    data     bss     dec     hex filename
  37920       0       0   37920    9420 crypto.org/twofish_common.o
  13209       0       0   13209    3399 crypto/twofish_common.o

Run tested (modprobe tcrypt reports ok). Please apply.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:06 +11:00
Sebastian Siewior 89e1265431 [CRYPTO] aes: Move common defines into a header file
This three defines are used in all AES related hardware.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:04 +11:00
Evgeniy Polyakov c3041f9c93 [CRYPTO] hifn_795x: Detect weak keys
HIFN driver update to use DES weak key checks (exported in this patch).

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:03 +11:00
Evgeniy Polyakov 16d004a2ed [CRYPTO] des: Create header file for common macros
This patch creates include/crypto/des.h for common macros shared between
DES implementations.

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:02 +11:00
Joy Latten 23e353c8a6 [CRYPTO] ctr: Add CTR (Counter) block cipher mode
This patch implements CTR mode for IPsec.
It is based off of RFC 3686.

Please note:
1. CTR turns a block cipher into a stream cipher.
Encryption is done in blocks, however the last block
may be a partial block.

A "counter block" is encrypted, creating a keystream
that is xor'ed with the plaintext. The counter portion
of the counter block is incremented after each block
of plaintext is encrypted.
Decryption is performed in same manner.

2. The CTR counterblock is composed of,
        nonce + IV + counter

The size of the counterblock is equivalent to the
blocksize of the cipher.
        sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize

The CTR template requires the name of the cipher
algorithm, the sizeof the nonce, and the sizeof the iv.
        ctr(cipher,sizeof_nonce,sizeof_iv)

So for example,
        ctr(aes,4,8)
specifies the counterblock will be composed of 4 bytes
from a nonce, 8 bytes from the iv, and 4 bytes for counter
since aes has a blocksize of 16 bytes.

3. The counter portion of the counter block is stored
in big endian for conformance to rfc 3686.
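
So for ctr(aes,4,8) the 16-byte counter block could be assembled roughly like this (an illustrative sketch; RFC 3686 starts the per-packet counter at 1):

#include <linux/types.h>
#include <linux/string.h>
#include <asm/byteorder.h>

/* ctr(aes,4,8): 4-byte nonce + 8-byte IV + 4-byte big-endian counter = 16. */
static void example_build_ctrblk(u8 ctrblk[16], const u8 nonce[4],
				 const u8 iv[8], u32 count)
{
	__be32 ctr = cpu_to_be32(count);

	memcpy(ctrblk, nonce, 4);
	memcpy(ctrblk + 4, iv, 8);
	memcpy(ctrblk + 12, &ctr, sizeof(ctr));
}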

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:01 +11:00
Al Viro 3c50b3683a fcrypt endianness misannotations
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-12-05 09:25:20 -08:00
Herbert Xu 38cb2419f5 [CRYPTO] api: Fix potential race in crypto_remove_spawn
As it is crypto_remove_spawn may try to unregister an instance which is
yet to be registered.  This patch fixes this by checking whether the
instance has been registered before attempting to remove it.

It also removes a bogus cra_destroy check in crypto_register_instance as
1) it's outside the mutex;
2) we have a check in __crypto_register_alg already.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-11-23 19:32:09 +08:00
Herbert Xu f347c4facf [CRYPTO] authenc: Move initialisations up to shut up gcc
It seems that newer versions of gcc have regressed in their abilities to
analyse initialisations.  This patch moves the initialisations up to avoid
the warnings.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-11-23 19:32:09 +08:00
Adrian Bunk 87ae9afdca cleanup asm/scatterlist.h includes
Not architecture specific code should not #include <asm/scatterlist.h>.

This patch therefore either replaces them with
#include <linux/scatterlist.h> or simply removes them if they were
unused.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-11-02 08:47:06 +01:00
Herbert Xu a5a613a429 [CRYPTO] tcrypt: Move sg_init_table out of timing loops
This patch moves the sg_init_table out of the timing loops for hash
algorithms so that it doesn't impact on the speed test results.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-27 00:51:21 -07:00
David S. Miller b733588559 [CRYPTO]: Initialize TCRYPT on-stack scatterlist objects correctly.
Use sg_init_one() and sg_init_table() as needed.
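
In essence (sketch):

#include <linux/scatterlist.h>

static void example_sg_setup(struct scatterlist *one, void *buf, unsigned int len,
			     struct scatterlist *table, unsigned int nents)
{
	sg_init_one(one, buf, len);	/* single entry: zero it, set the buffer, mark end */
	sg_init_table(table, nents);	/* array: zero all entries and mark the last one */
}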

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-26 00:38:10 -07:00
David S. Miller a6767721a5 [CRYPTO]: HMAC needs some more scatterlist fixups.
hmac_setkey(), hmac_init(), and hmac_final() each have
a single on-stack scatterlist.  Initialize it using
sg_init_one() instead of sg_set_buf().

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-26 00:37:12 -07:00
Vlad Yasevich 41fb285430 [CRYPTO]: Fix hmac_digest from the SG breakage.
Crypto now uses SG helper functions.  Fix hmac_digest to use those
functions correctly and fix the oops associated with it.

Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-25 18:46:26 -07:00
Jens Axboe 642f149031 SG: Change sg_set_page() to take length and offset argument
Most drivers need to set length and offset as well, so may as well fold
those three lines into one.

Add sg_assign_page() for those two locations that only needed to set
the page, where the offset/length is set outside of the function context.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-24 11:20:47 +02:00
Jens Axboe 78c2f0b8c2 [SG] Update crypto/ to sg helpers
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-22 19:40:16 +02:00
John Anthony Kazos Jr 991d17403c crypto: convert "crypto" subdirectory to UTF-8
Convert the subdirectory "crypto" to UTF-8. The files changed are
<crypto/fcrypt.c> and <crypto/api.c>.

Signed-off-by: John Anthony Kazos Jr. <jakj@j-a-k-j.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
2007-10-19 23:06:17 +02:00
Jens Axboe ab83407e9e crypto: don't pollute the global namespace with sg_next()
It's a subsystem function, prefix it as such.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-16 11:07:09 +02:00
Jan Glauber 5265eeb2b0 [CRYPTO] sha: Add header file for SHA definitions
There are currently several SHA implementations that all define their own
initialization vectors and size values. Since these values are identical,
move them to a header file under include/crypto.

Signed-off-by: Jan Glauber <jang@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:50 -07:00
Sebastian Siewior ad5d27899f [CRYPTO] sha: Load the SHA[1|256] module by an alias
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.

Also remove the probe for sha1 in padlock's init code.

Quote from Herbert:
  The probe is actually pointless since we can always probe when
  the algorithm is actually used which does not lead to dead-locks
  like this.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:50 -07:00
Sebastian Siewior f8246af005 [CRYPTO] aes: Rename aes to aes-generic
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:49 -07:00
Sebastian Siewior c5a511f1cd [CRYPTO] des: Rename des to des-generic
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:49 -07:00
Herbert Xu 7607bd8ff0 [CRYPTO] blkcipher: Added blkcipher_walk_virt_block
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher.  This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.
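
A sketch of how a stream-cipher mode might use it (signatures assumed from the description above; real code checks and propagates errors more carefully):

#include <crypto/algapi.h>
#include <linux/scatterlist.h>

static int example_walk(struct blkcipher_desc *desc,
			struct scatterlist *dst, struct scatterlist *src,
			unsigned int nbytes, unsigned int cipher_bsize)
{
	struct blkcipher_walk walk;
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	/* Walk in multiples of the underlying cipher's block size even
	 * though the mode itself advertises a block size of 1. */
	err = blkcipher_walk_virt_block(desc, &walk, cipher_bsize);

	while (walk.nbytes) {
		/* ... process walk.src.virt.addr into walk.dst.virt.addr ... */
		err = blkcipher_walk_done(desc, &walk, 0);
	}
	return err;
}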

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
Herbert Xu 2614de1b9a [CRYPTO] blkcipher: Increase kmalloc amount to aligned block size
Now that the block size is no longer a multiple of the alignment, we need to
increase the kmalloc amount in blkcipher_next_slow to use the aligned block
size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
Herbert Xu d8058480b3 [CRYPTO] api: Explain the comparison on larval cra_name
This patch adds a comment to explain why we compare the cra_driver_name of
the algorithm being registered against the cra_name of a larval as opposed
to the cra_driver_name of the larval.

In fact larvals have only one name, cra_name which is the name that was
requested by the user.  The test here is simply trying to find out whether
the algorithm being registered can or can not satisfy the larval.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:47 -07:00
Herbert Xu 70613783fc [CRYPTO] blkcipher: Remove alignment restriction on block size
Previously we assumed for convenience that the block size is a multiple of
the algorithm's required alignment.  With the pending addition of CTR this
will no longer be the case as the block size will be 1 due to it being a
stream cipher.  However, the alignment requirement will be that of the
underlying implementation which will most likely be greater than 1.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:46 -07:00
Herbert Xu e4c5c6c9b0 [CRYPTO] authenc: Kill spaces in algorithm names
We do not allow spaces in algorithm names or parameters.  Thanks to Joy Latten
for pointing this out.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:46 -07:00
Herbert Xu 720a650f8a [CRYPTO] cryptomgr: Fix parsing of recursive algorithms
As Joy Latten points out, inner algorithm parameters will miss the closing
bracket which will also cause the outer algorithm to terminate prematurely.

This patch fixes that and also kills the WARN_ON if the number of parameters
exceeds the maximum, as that is a user error.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:45 -07:00
Rik Snel f19f5111c9 [CRYPTO] xts: XTS blockcipher mode implementation without partial blocks
XTS is currently considered to be the successor of the LRW mode by the IEEE 1619
workgroup. LRW was discarded because it is not secure if the encryption key
itself is encrypted with LRW.

XTS does not have this problem. The implementation is pretty straightforward,
a new function was added to gf128mul to handle GF(128) elements in ble format.
Four testvectors from the specification
	http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
were added, and they verify on my system.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:45 -07:00
Ingo Oeser 5aaff0c8f7 [CRYPTO] blkcipher: Use max() in blkcipher_get_spot() to state the intention
Use max in blkcipher_get_spot() instead of open coding it.

Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:44 -07:00
Herbert Xu 70dec235d8 [CRYPTO] api: Kill crypto_km_types
When scatterwalk is built as a module, digest.c is broken because it
requires the crypto_km_types structure, which is in scatterwalk.  This
patch removes the crypto_km_types structure by encoding the logic into
crypto_kmap_type directly.

In fact, this even saves a few bytes of code (not to mention the data
structure itself) on i386 which is about the only place where it's
needed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:44 -07:00
Herbert Xu 3c09f17c3d [CRYPTO] aead: Add authenc
This patch adds the authenc algorithm which constructs an AEAD algorithm
from an asynchronous block cipher and a hash.  The construction is done
by concatenating the encrypted result from the cipher with the output
from the hash, as is used by the IPsec ESP protocol.

The authenc algorithm exists as a template with four parameters:

	authenc(auth, authsize, enc, enckeylen).

The authentication algorithm, the authentication size (i.e., truncating
the output of the authentication algorithm), the encryption algorithm,
and the encryption key length.  Both the size field and the key length
field are in bytes.  For example, AES-128 with SHA1-HMAC would be
represented by

	authenc(hmac(sha1), 12, cbc(aes), 16)

The key for the authenc algorithm is the concatenation of the keys for
the authentication algorithm with the encryption algorithm.  For the
above example, if a key of length 36 bytes is given, then hmac(sha1)
would receive the first 20 bytes while the last 16 would be given to
cbc(aes).
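
In code, the example above might look like this (a minimal sketch; note that the newer commits earlier in this log drop the two numeric template parameters again):

#include <linux/crypto.h>
#include <linux/err.h>

/* 36-byte key: first 20 bytes to hmac(sha1), last 16 bytes to cbc(aes). */
static int example_authenc_setkey(const u8 key[36])
{
	struct crypto_aead *tfm;
	int err;

	tfm = crypto_alloc_aead("authenc(hmac(sha1),12,cbc(aes),16)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, 36);
	crypto_free_aead(tfm);
	return err;
}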

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:43 -07:00
Herbert Xu 5fa0fea274 [CRYPTO] scatterwalk: Add scatterwalk_map_and_copy
This patch adds the function scatterwalk_map_and_copy which reads or
writes a chunk of data from a scatterlist at a given offset.  It will
be used by authenc which would read/write the authentication data at
the end of the cipher/plain text.
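
A sketch of the intended use (the signature, in particular the final direction flag, is assumed from the description above):

#include "scatterwalk.h"	/* in-tree crypto/ header at the time */

/* Append an ICV to the data in sg, then read it back for comparison. */
static void example_icv(struct scatterlist *sg, unsigned int cryptlen,
			u8 *icv, unsigned int authsize)
{
	scatterwalk_map_and_copy(icv, sg, cryptlen, authsize, 1);	/* buf -> sg */
	scatterwalk_map_and_copy(icv, sg, cryptlen, authsize, 0);	/* sg -> buf */
}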

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:42 -07:00
Herbert Xu e962a653f3 [CRYPTO] api: Move scatterwalk into algapi
The scatterwalk code is only used by algorithms that can be built as
a module.  Therefore we can move it into algapi.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:41 -07:00
Herbert Xu 2de98e7544 [CRYPTO] ablkcipher: Remove queue pointer from common alg object
Since not everyone needs a queue pointer and those who need it can
always get it from the context anyway the queue pointer in the
common alg object is redundant.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:41 -07:00
Herbert Xu 791b4d5f73 [CRYPTO] api: Add missing headers for setkey_unaligned
This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function.  It also breaks a couple of
long lines.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:40 -07:00
Herbert Xu 39e1ee011f [CRYPTO] api: Add support for multiple template parameters
This patch adds support for having multiple parameters to
a template, separated by a comma.  It also adds support
for integer parameters in addition to the current algorithm
parameter type.

This will be used by the authenc template which will have
four parameters: the authentication algorithm, the encryption
algorithm, the authentication size and the encryption key
length.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:40 -07:00
Herbert Xu 1ae978208e [CRYPTO] api: Add aead crypto type
This patch adds crypto_aead which is the interface for AEAD
(Authenticated Encryption with Associated Data) algorithms.

AEAD algorithms perform authentication and encryption in one
step.  Traditionally users (such as IPsec) would use two
different crypto algorithms to perform these.  With AEAD
this comes down to one algorithm and one operation.

Of course if traditional algorithms were used we'd still
be doing two operations underneath.  However, real AEAD
algorithms may allow the underlying operations to be
optimised as well.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:39 -07:00
Hye-Shik Chang e2ee95b8c6 [CRYPTO] seed: New cipher algorithm
This patch adds support for the SEED cipher (RFC4269).

This cipher has been used by a few VPN appliance vendors in Korea for
several years, and it was verified by KISA, who developed the
algorithm itself.

Given its importance in the Korean banking industry, it would be great
if Linux incorporated support for it.

Signed-off-by: Hye-Shik Chang <perky@FreeBSD.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:38 -07:00
Adrian Bunk a349365e5e [CRYPTO] Kconfig: Remove "default m"s
Other options requiring specific block cipher algorithms already have
the appropriate selects.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:36 -07:00
Dan Williams 6247cdc2cd async_tx: fix dma_wait_for_async_tx
Fix dma_wait_for_async_tx to not loop forever in the case where a
dependency chain is longer than two entries.  This condition will not
happen with current in-kernel drivers, but fix it for future drivers.

Found-by: Saeed Bishara <saeed.bishara@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-09-24 10:26:26 -07:00
Herbert Xu 32528d0fbd [CRYPTO] blkcipher: Fix inverted test in blkcipher_get_spot
The previous patch had the conditional inverted.  This patch fixes it
so that we return the original position if it does not straddle a page.

Thanks to Bob Gilligan for spotting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-10 15:51:11 +08:00
Herbert Xu e4630f9fd8 [CRYPTO] blkcipher: Fix handling of kmalloc page straddling
The function blkcipher_get_spot tries to return a buffer of
the specified length that does not straddle a page.  It has
an off-by-one bug so it may advance a page unnecessarily.

What's worse, one of its callers doesn't provide a buffer
that's sufficiently long for this operation.

This patch fixes both problems.  Thanks to Bob Gilligan for
diagnosing this problem and providing a fix.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-09 08:45:21 +01:00
Sebastian Siewior 0681717678 [CRYPTO] api: fix writing into unallocated memory in setkey_aligned
setkey_unaligned() committed in ca7c39385c
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-08-06 15:33:56 +08:00
Dan Williams eb0645a8b1 async_tx: fix kmap_atomic usage in async_memcpy
Andrew Morton:
	[async_memcpy] is very wrong if both ASYNC_TX_KMAP_DST and
	ASYNC_TX_KMAP_SRC can ever be set.  We'll end up using the same kmap
	slot for both src and dest and we get either corrupted data or a BUG.

Evgeniy Polyakov:
	Btw, shouldn't it always be kmap_atomic() even if the flag is not set?
	Those pages are the usual ones returned by alloc_page().

So fix the usage of kmap_atomic and kill the ASYNC_TX_KMAP_DST and
ASYNC_TX_KMAP_SRC flags.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-20 08:44:19 -07:00
Pavel Emelianov 13d31894b3 Make crypto API use seq_list_xxx helpers
Simple and stupid - just use the same code from another place in the kernel.

Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16 09:05:42 -07:00
David S. Miller d09f51b699 Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6
Conflicts:

	crypto/Kconfig
2007-07-14 23:47:04 -07:00
Dan Williams 9bc89cd82d async_tx: add the async_tx api
The async_tx api provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the api can optimize for asynchronous operation and the
api will fit the chain of operations to the available offload resources. 
 
	I imagine that any piece of ADMA hardware would register with the
	'async_*' subsystem, and a call to async_X would be routed as
	appropriate, or be run in-line. - Neil Brown

async_tx exploits the capabilities of struct dma_async_tx_descriptor to
provide an api of the following general format:

struct dma_async_tx_descriptor *
async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			dma_async_tx_callback cb_fn, void *cb_param)
{
	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
	struct dma_device *device = chan ? chan->device : NULL;
	int int_en = cb_fn ? 1 : 0;
	struct dma_async_tx_descriptor *tx = device ?
		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

	if (tx) { /* run <operation> asynchronously */
		...
		tx->tx_set_dest(addr, tx, index);
		...
		tx->tx_set_src(addr, tx, index);
		...
		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
	} else { /* run <operation> synchronously */
		...
		<operation>
		...
		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
	}

	return tx;
}

async_tx_find_channel() returns a capable channel from its pool.  The
channel pool is organized as a per-cpu array of channel pointers.  The
async_tx_rebalance() routine is tasked with managing these arrays.  In the
uniprocessor case async_tx_rebalance() tries to spread responsibility
evenly over channels of similar capabilities.  For example if there are two
copy+xor channels, one will handle copy operations and the other will
handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
channel0 while cpu1 gets copy channel 1 and xor channel 1.  When a
dependency is specified async_tx_find_channel defaults to keeping the
operation on the same channel.  A xor->copy->xor chain will stay on one
channel if it supports both operation types, otherwise the transaction will
transition between a copy and a xor resource.

Currently the raid5 implementation in the MD raid456 driver has been
converted to the async_tx api.  A driver for the offload engines on the
Intel Xscale series of I/O processors, iop-adma, is provided in a later
commit.  With the iop-adma driver and async_tx, raid456 is able to offload
copy, xor, and xor-zero-sum operations to hardware engines.
 
On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
improvement) and sequential reads to a degraded array (40 - 55%
improvement).  For the other cases performance was roughly equal, +/- a few
percentage points.  On a x86-smp platform the performance of the async_tx
implementation (in synchronous mode) was also +/- a few percentage points
of the original implementation.  According to 'top' on iop342 CPU
utilization drops from ~50% to ~15% during a 'resync' while the speed
according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
 
The tiobench command line used for testing was: tiobench --size 2048
--block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available

Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
  async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will
  fire at operation completion time and the callback will occur in a
  tasklet.  If the channel does not support interrupts then a live
  polling wait will be performed
* the api is written as a dmaengine client that requests all available
  channels
* In support of dependencies the api implicitly schedules channel-switch
  interrupts.  The interrupt triggers the cleanup tasklet which causes
  pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software
  xor routine.  To the software routine the destination address is an implied
  source, whereas engines treat it as a write-only destination.  This patch
  modifies the xor_blocks routine to take an explicit destination address
  to mirror the hardware.

Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and
  interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities
  evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to
  the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and
  the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
2007-07-13 08:06:14 -07:00
Dan Williams 685784aaf3 xor: make 'xor_blocks' a library routine for use with async_tx
The async_tx api tries to use a dma engine for an operation, but will fall
back to an optimized software routine otherwise.  Xor support is
implemented using the raid5 xor routines.  For organizational purposes this
routine is moved to a common area.
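
Callers then look roughly like this (a sketch under the assumption that the shared routine keeps the conventional count/length/destination/source-array signature and accumulates into the destination):

#include <linux/raid/xor.h>

/* Combine two source buffers into dest via the shared xor_blocks routine. */
static void example_xor(void *dest, void *src1, void *src2, unsigned int bytes)
{
	void *srcs[2] = { src1, src2 };

	xor_blocks(2, bytes, dest, srcs);
}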

The following fixes are also made:
* rename xor_block => xor_blocks, suggested by Adrian Bunk
* ensure that xor.o initializes before md.o in the built-in case
* checkpatch.pl fixes
* mark calibrate_xor_blocks __init, Adrian Bunk

Cc: Adrian Bunk <bunk@stusta.de>
Cc: NeilBrown <neilb@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-07-13 08:06:14 -07:00
Sebastian Siewior e559e91cce [CRYPTO] api: Allow ablkcipher with no queues
Evgeniy's hifn driver and probably mine don't use ablkcipher->queue at all. 
The show method of ablkcipher will access this field without checking if it 
is valid.

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:55 +08:00
Sebastian Siewior ca7c39385c [CRYPTO] api: Handle unaligned keys in setkey
setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the
alignment requested by the algorithm. This patch fixes it. The extra
memory is allocated by kmalloc() with GFP_ATOMIC flag.

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:54 +08:00
Herbert Xu fe3c5206ad [CRYPTO] api: Wake up all waiters when larval completes
Right now when a larval matures or when it dies of an error we
only wake up one waiter.  This would cause other waiters to timeout
unnecessarily.  This patch changes it to use complete_all to wake
up all waiters.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:53 +08:00
Jan Engelhardt 2e290f43dd [CRYPTO] Kconfig: Use menuconfig objects
Use menuconfigs instead of menus, so the whole menu can be disabled at once
instead of going through all options.

Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:53 +08:00
Rafael J. Wysocki 189fe3174c [CRYPTO] cryptd: Fix problem with cryptd and the freezer
Make sure that cryptd is marked as nonfreezable and does not hold up the
freezer.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-31 18:10:22 +10:00
Herbert Xu da7cd59ab9 [CRYPTO] api: Read module pointer before freeing algorithm
The function crypto_mod_put first frees the algorithm and then drops
the reference to its module.  Unfortunately we read the module pointer
which after freeing the algorithm and that pointer sits inside the
object that we just freed.

So this patch reads the module pointer out before we free the object.

Thanks to Luca Tettamanti for reporting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-19 14:51:00 +10:00
Herbert Xu 29059d12e0 [CRYPTO] tcrypt: Add missing error check
The return value of crypto_hash_final isn't checked in test_hash_cycles.
This patch corrects this.  Thanks to Eric Sesterhenn for reporting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-18 16:25:19 +10:00
David Sterba 3dde6ad8fc Fix trivial typos in Kconfig* files
Fix several typos in help text in Kconfig* files.

Signed-off-by: David Sterba <dave@jikos.cz>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2007-05-09 07:12:20 +02:00
Herbert Xu 1605b8471d [CRYPTO] cryptomgr: Fix use after free
By the time kthread_run returns the param may have already been freed
so writing the returned thread_struct pointer to param is wrong.

In fact, we don't need it in param anyway so this patch simply puts it
on the stack.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-09 13:04:39 +10:00
Herbert Xu 124b53d020 [CRYPTO] cryptd: Add software async crypto daemon
This patch adds the cryptd module which is a template that takes a
synchronous software crypto algorithm and converts it to an asynchronous
one by executing it in a kernel thread.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu a73e69965f [CRYPTO] api: Do not remove users unless new algorithm matches
As it is whenever a new algorithm with the same name is registered
users of the old algorithm will be removed so that they can take
advantage of the new algorithm.  This presents a problem when the
new algorithm is not equivalent to the old algorithm.  In particular,
the new algorithm might only function on top of the existing one.

Hence we should not remove users unless they can make use of the
new algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu cf02f5da94 [CRYPTO] cryptomgr: Fix parsing of nested templates
This patch allows the use of nested templates by allowing the use of
brackets inside a template parameter.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu b5b7f08869 [CRYPTO] api: Add async blkcipher type
This patch adds the mid-level interface for asynchronous block ciphers.
It also includes a generic queueing mechanism that can be used by other
asynchronous crypto operations in future.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu ebc610e5bc [CRYPTO] templates: Pass type/mask when creating instances
This patch passes the type/mask along when constructing instances of
templates.  This is in preparation for templates that may support
multiple types of instances depending on what is requested.  For example,
the planned software async crypto driver will use this construct.

For the moment this allows us to check whether the instance constructed
is of the correct type and avoid returning success if the type does not
match.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu 6158efc090 [CRYPTO] tcrypt: Use async blkcipher interface
This patch converts the tcrypt module to use the asynchronous block cipher
interface.  As all synchronous block ciphers can be used through the async
interface, tcrypt is still able to test them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:30 +10:00
Herbert Xu 32e3983fe5 [CRYPTO] api: Add async block cipher interface
This patch adds the frontend interface for asynchronous block ciphers.
In addition to the usual block cipher parameters, there is a callback
function pointer and a data pointer.  The callback will be invoked only
if the encrypt/decrypt handlers return -EINPROGRESS.  In other words,
if the return value is zero, the completion handler (or the equivalent
code) needs to be invoked by the caller.

The request structure is allocated and freed by the caller.  Its size
is determined by calling crypto_ablkcipher_reqsize().  The helpers
ablkcipher_request_alloc/ablkcipher_request_free can be used to manage
the memory for a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:30 +10:00
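
A minimal usage sketch of the interface described above; the "cbc(aes)"
choice, the 128-bit key and the my_complete() callback are illustrative,
and a real caller must keep the request alive until completion:

	#include <linux/crypto.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	/* Runs only if the encrypt call below returned -EINPROGRESS. */
	static void my_complete(struct crypto_async_request *req, int err)
	{
		/* signal whoever issued the request; req->data carries its context */
	}

	static int my_encrypt(struct scatterlist *sg, unsigned int nbytes,
			      const u8 *key, void *iv)
	{
		struct crypto_ablkcipher *tfm;
		struct ablkcipher_request *req;
		int err;

		tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		crypto_ablkcipher_setkey(tfm, key, 16);

		/* request size comes from crypto_ablkcipher_reqsize() */
		req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			crypto_free_ablkcipher(tfm);
			return -ENOMEM;
		}
		ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
						my_complete, NULL);
		ablkcipher_request_set_crypt(req, sg, sg, nbytes, iv);

		err = crypto_ablkcipher_encrypt(req);
		if (err == -EINPROGRESS)
			return 0;	/* completion handled later in my_complete() */

		/* err == 0: finished synchronously, run the completion code here */
		ablkcipher_request_free(req);
		crypto_free_ablkcipher(tfm);
		return err;
	}
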
Herbert Xu 03f5d8cedb [CRYPTO] api: Proc functions should be marked as unused
The proc functions were incorrectly marked as used rather than unused.
They may be unused if proc is disabled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:29 +10:00
Jouni Malinen 85d32e7b0e [PATCH] Update my email address from jkmaline@cc.hut.fi to j@w1.fi
After 13 years of use, it looks like my email address is finally going
to disappear. While this is likely to drop the amount of incoming spam
greatly ;-), it may also affect more appropriate messages, so let's
update my email address in various places. In addition, Host AP mailing
list is subscribers-only and linux-wireless can also be used for
discussing issues related to this driver which is now shown in
MAINTAINERS.

Signed-off-by: Jouni Malinen <j@w1.fi>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2007-04-28 11:01:01 -04:00
Herbert Xu 9f11672728 [CRYPTO] api: Flush the current page rather than the next
On platforms where flush_dcache_page is needed we're currently flushing
the next page rather than the one we've just processed.  This patch fixes
the off-by-one error.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-31 12:58:20 +10:00
Herbert Xu 4ee531a3e6 [CRYPTO] api: Use the right value when advancing scatterwalk_copychunks
In the scatterwalk_copychunks loop, we should be advancing by
len_this_page and not nbytes.  The latter is the total length.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-31 12:16:20 +10:00
Sebastian Siewior 7bc301e97b [CRYPTO] tcrypt: Fix error checking for comp allocation
This patch fixes loading the tcrypt module when deflate isn't available
at all (isn't built).

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-21 08:58:43 +11:00
J. Bruce Fields f70ee5ec8f [CRYPTO] api: scatterwalk_copychunks() fails to advance through scatterlist
In the loop in scatterwalk_copychunks(), if walk->offset is zero,
then scatterwalk_pagedone rounds that up to the nearest page boundary:

		walk->offset += PAGE_SIZE - 1;
		walk->offset &= PAGE_MASK;

which is a no-op in this case, so we don't advance to the next element
of the scatterlist array:

		if (walk->offset >= walk->sg->offset + walk->sg->length)
			scatterwalk_start(walk, sg_next(walk->sg));

and we end up copying the same data twice.

It appears that other callers of scatterwalk_{page}done first advance
walk->offset, so I believe that's the correct thing to do here.

This caused a bug in NFS when run with krb5p security, which would
cause some writes to fail with permissions errors--for example, writes
of less than 8 bytes (the des blocksize) at the start of a file.

A git-bisect shows the bug was originally introduced by
5c64097aa0, first in 2.6.19-rc1.

Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-21 08:50:12 +11:00
Arjan van de Ven 2b8693c061 [PATCH] mark struct file_operations const 3
Many struct file_operations in the kernel can be "const".  Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data.  In addition it'll catch accidental writes at compile time to
these shared resources.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-12 09:48:45 -08:00
David S. Miller 9783e1df7a Merge branch 'HEAD' of master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6
Conflicts:

	crypto/Kconfig
2007-02-08 15:25:18 -08:00
Noriaki TAKAMIYA 02ab5a7056 [CRYPTO] camellia: added the testing code of Camellia cipher
This patch adds the test code for the Camellia cipher to the testing module.

Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:04 +11:00
Noriaki TAKAMIYA d64beac050 [CRYPTO] camellia: added the code of Camellia cipher algorithm.
This patch adds the main code of Camellia cipher algorithm.

Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:03 +11:00
Noriaki TAKAMIYA 04ac7db3f2 [CRYPTO] camellia: Add Kconfig entry.
This patch adds the Kconfig entry for Camellia.

Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:03 +11:00
Herbert Xu 6b701dde8e [CRYPTO] xcbc: Use new cipher interface
This patch changes xcbc to use the new cipher encrypt_one interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:01 +11:00
Herbert Xu 27d2a33007 [CRYPTO] api: Allow multiple frontends per backend
This patch adds support for multiple frontend types for each backend
algorithm by passing the type and mask through to the backend type
init function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:01 +11:00
Herbert Xu 2e306ee016 [CRYPTO] api: Add type-safe spawns
This patch allows spawns of specific types (e.g., cipher) to be allocated.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:01 +11:00
Herbert Xu f1ddcaf339 [CRYPTO] api: Remove deprecated interface
This patch removes the old cipher interface and related code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:00 +11:00
Herbert Xu ba8da2a948 [CRYPTO] tcrypt: Removed vestigial crypto_alloc_tfm call
The crypto_comp conversion missed the last remaining crypto_alloc_tfm
call.  This patch replaces it with crypto_alloc_comp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:21:00 +11:00
David Howells 90831639a6 [CRYPTO] fcrypt: Add FCrypt from RxRPC
Add a crypto module to provide FCrypt encryption as used by RxRPC.

Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:20:59 +11:00
David Howells 91652be5d1 [CRYPTO] pcbc: Add Propagated CBC template
Add PCBC crypto template support as used by RxRPC.

Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:20:59 +11:00
Andrew Donofrio a28091ae17 [CRYPTO] tcrypt: Added test vectors for sha384/sha512
This patch adds tests for SHA384 HMAC and SHA512 HMAC to the tcrypt
module.  Test data was taken from RFC 4231.  This patch is a follow-up
to the discovery (bug 7646) that the kernel SHA384 HMAC implementation
was not generating proper SHA384 HMACs.

Signed-off-by: Andrew Donofrio <linuxbugzilla@kriptik.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:20:58 +11:00
Herbert Xu fb469840b8 [CRYPTO] all: Check for usage in hard IRQ context
Using blkcipher/hash crypto operations in hard IRQ context can lead
to random memory corruption due to the reuse of kmap_atomic slots.
Since crypto operations were never meant to be used in hard IRQ
contexts, this patch checks for such usage and returns an error
before kmap_atomic is performed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-02-07 09:20:58 +11:00
Jan Glauber 86aa9fc245 [S390] move crypto options and some cleanup.
This patch moves the config options for the s390 crypto instructions
to the standard "Hardware crypto devices" menu. In addition some
cleanup has been done: use a flag for supported keylengths, add a
warning about machine limitations, return ENOTSUPP in case the
hardware has no support, remove superfluous printks and update
email addresses.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-02-05 21:18:14 +01:00
Al Viro ee36c2bf8e [PATCH] uml problems with linux/io.h
Remove useless includes of linux/io.h, don't even try to build iomap_copy
on uml (it doesn't have readb() et.al., so...)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-13 09:05:52 -08:00
Herbert Xu 686106ff5e [CRYPTO] sha512: Fix sha384 block size
The SHA384 block size should be 128 bytes, not 96 bytes.  This was
spotted by Andrew Donofrio.

Fortunately the block size isn't actually used anywhere so this typo
has had no real impact.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-11 14:34:33 -08:00
David S. Miller 9ebed9d182 [CRYPTO] lrw: round --> lrw_round
Fixes:

crypto/lrw.c:99: warning: conflicting types for built-in function ‘round’

Signed-off-by: David S. Miller <davem@davemloft.net>
2006-12-06 18:39:00 -08:00
Rik Snel f3d1044cd0 [CRYPTO] tcrypt: LRW test vectors
Do modprobe tcrypt mode=10 to check the included test vectors, they are
from: http://grouper.ieee.org/groups/1619/email/pdf00017.pdf and from
http://www.mail-archive.com/stds-p1619@listserv.ieee.org/msg00173.html.

To make the last test vector fit, I had to increase the buffer size of
input and result to 512 bytes.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:58 -08:00
Rik Snel 64470f1b85 [CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode
Main module, this implements the Liskov Rivest Wagner block cipher mode
in the new blockcipher API. The implementation is based on ecb.c.

The LRW-32-AES specification I used can be found at:
http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

It implements the optimization specified as optional in the
specification, and in addition it uses optimized multiplication
routines from gf128mul.c.

Since gf128mul.[ch] is not tested on bigendian, this cipher mode
may currently fail badly on bigendian machines.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:56 -08:00
Rik Snel c494e0705d [CRYPTO] lib: table driven multiplications in GF(2^128)
A lot of cipher modes need multiplications in GF(2^128): LRW, ABL, GCM...
I use functions from this library in my LRW implementation and I will
also use them in my ABL (Arbitrary Block Length, an unencumbered (correct
me if I am wrong) wide block cipher mode).

Elements of GF(2^128) must be presented as u128 *; this encourages automatic
and proper alignment.

The library contains support for two different representations of GF(2^128),
see the comment in gf128mul.h.  There are different levels of optimization
(memory/speed tradeoff).

The code is based on work by Dr Brian Gladman. Notable changes:
- deletion of two optimization modes
- change from u32 to u64 for faster handling on 64bit machines
- support for 'bbe' representation in addition to the already implemented
  'lle' representation.
- move 'inline void' functions from header to 'static void' in the
  source file
- update to use the linux coding style conventions

The original can be found at:
http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

The copyright (and GPL statement) of the original author is preserved.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:55 -08:00
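
As an illustration, multiplying two field elements in the 'lle'
representation might look like the sketch below, assuming the
gf128mul_lle() helper and be128 element type provided by the library:

	#include <crypto/gf128mul.h>

	static void gf128_mul_example(be128 *acc, const be128 *x)
	{
		/* acc = acc * x in GF(2^128), 'lle' bit/byte convention */
		gf128mul_lle(acc, x);
	}
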
Adrian Bunk cc44215eaa [CRYPTO] api: Remove unused functions
This patch removes the following no longer used functions:
- api.c: crypto_alg_available()
- digest.c: crypto_digest_init()
- digest.c: crypto_digest_update()
- digest.c: crypto_digest_final()
- digest.c: crypto_digest_digest()

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:54 -08:00
Adrian Bunk 5b37538a51 [CRYPTO] xcbc: Make needlessly global code static
On Tue, Nov 14, 2006 at 01:41:25AM -0800, Andrew Morton wrote:
>...
> Changes since 2.6.19-rc5-mm2:
>...
>  git-cryptodev.patch
>...
>  git trees
>...

This patch makes some needlessly global code static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:51 -08:00
Kazunori MIYAZAWA 5b2becf5dc [CRYPTO] tcrypt: Add test vectors of AES_XCBC
Test vectors of XCBC with AES-128.

Signed-off-by: Kazunori MIYAZAWA <miyazawa@linux-ipv6.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:50 -08:00
Kazunori MIYAZAWA 333b0d7eea [CRYPTO] xcbc: New algorithm
This is core code of XCBC.

XCBC is an algorithm that forms a MAC algorithm out of a cipher algorithm.
For example, AES-XCBC-MAC is a MAC algorithm based on the AES cipher
algorithm.

Signed-off-by: Kazunori MIYAZAWA <miyazawa@linux-ipv6.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-12-06 18:38:49 -08:00
David Howells 65f27f3844 WorkStruct: Pass the work_struct pointer instead of context data
Pass the work_struct pointer to the work function rather than context data.
The work function can use container_of() to work out the data.

For the cases where the container of the work_struct may go away the moment the
pending bit is cleared, it is made possible to defer the release of the
structure by deferring the clearing of the pending bit.

To make this work, an extra flag is introduced into the management side of the
work_struct.  This governs auto-release of the structure upon execution.

Ordinarily, the work queue executor would release the work_struct for further
scheduling or deallocation by clearing the pending bit prior to jumping to the
work function.  This means that, unless the driver makes some guarantee itself
that the work_struct won't go away, the work function may not access anything
else in the work_struct or its container lest they be deallocated.  This is a
problem if the auxiliary data is taken away (as done by the last patch).

However, if the pending bit is *not* cleared before jumping to the work
function, then the work function *may* access the work_struct and its container
with no problems.  But then the work function must itself release the
work_struct by calling work_release().

In most cases, automatic release is fine, so this is the default.  Special
initiators exist for the non-auto-release case (ending in _NAR).


Signed-Off-By: David Howells <dhowells@redhat.com>
2006-11-22 14:55:48 +00:00
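
The pattern this implies for work functions, sketched with hypothetical
names:

	#include <linux/workqueue.h>

	struct my_data {
		struct work_struct work;
		int value;		/* the former context data lives here */
	};

	static void my_work_fn(struct work_struct *work)
	{
		/* recover the container instead of receiving it as an argument */
		struct my_data *d = container_of(work, struct my_data, work);

		/* use d->value ... */
	}

	/* setup:  INIT_WORK(&d->work, my_work_fn);  schedule_work(&d->work); */
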
Herbert Xu 43518407d5 [CRYPTO] api: Select cryptomgr where needed
Since cryptomgr is the only way to construct algorithm instances
for now it makes sense to let the templates depend on it as
otherwise it may be left off inadvertently.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-10-16 21:28:58 +10:00
Akinobu Mita 9765d262b8 [CRYPTO] api: fix crypto_alloc_base() return value
This patch makes crypto_alloc_base() return proper return value.

- If kzalloc() failure happens within __crypto_alloc_tfm(),
  crypto_alloc_base() returns NULL. But crypto_alloc_base()
  is supposed to return error code as pointer. So this patch
  makes it return -ENOMEM in that case.

- crypto_alloc_base() is supposed to return -EINTR if it is
  interrupted by a signal.  But it may not return -EINTR.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-10-11 22:29:51 +10:00
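
A sketch of the checking this implies for callers; the algorithm name and
type are chosen purely for illustration:

	#include <linux/crypto.h>
	#include <linux/err.h>

	static int get_cipher_example(void)
	{
		struct crypto_tfm *tfm;

		tfm = crypto_alloc_base("aes", CRYPTO_ALG_TYPE_CIPHER,
					CRYPTO_ALG_TYPE_MASK);
		if (IS_ERR(tfm))	/* never NULL: -ENOMEM, -EINTR, ... */
			return PTR_ERR(tfm);

		crypto_free_tfm(tfm);
		return 0;
	}
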
Alexey Dobriyan d08f74e58c [PATCH] serpent: fix endian warnings
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-10 16:15:33 -07:00
Herbert Xu 73af07de3e [CRYPTO] hmac: Fix error truncation by unlikely()
The error return values are truncated by unlikely so we need to
save it first.  Thanks to Kyle Moffett for spotting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-23 16:48:46 -07:00
Herbert Xu 5f77043f0f [CRYPTO] hmac: Fix hmac_init update call
The crypto_hash_update call in hmac_init gave the number 1
instead of the length of the sg list in bytes.  This is a
missed conversion from the digest => hash change.

As tcrypt only tests crypto_hash_digest it didn't catch this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-09-23 11:34:43 -07:00
Herbert Xu e4d5b79c66 [CRYPTO] users: Use crypto_comp and crypto_has_*
This patch converts all users to use the new crypto_comp type and the
crypto_has_* functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:22 +10:00
Herbert Xu fce32d70ba [CRYPTO] api: Add crypto_comp and crypto_has_*
This patch adds the crypto_comp type to complete the compile-time checking
conversion.  The functions crypto_has_alg and crypto_has_cipher, etc. are
also added to replace crypto_alg_available.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:21 +10:00
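
A sketch of the resulting usage, with deflate as an illustrative
algorithm:

	#include <linux/crypto.h>
	#include <linux/err.h>
	#include <linux/errno.h>

	static int compress_example(const u8 *src, unsigned int slen,
				    u8 *dst, unsigned int *dlen)
	{
		struct crypto_comp *tfm;
		int err;

		if (!crypto_has_alg("deflate", 0, 0))
			return -ENOENT;		/* replaces crypto_alg_available() */

		tfm = crypto_alloc_comp("deflate", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		/* *dlen is updated with the compressed length on success */
		err = crypto_comp_compress(tfm, src, slen, dst, dlen);
		crypto_free_comp(tfm);
		return err;
	}
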
Herbert Xu 8425165dfe [CRYPTO] digest: Remove old HMAC implementation
This patch removes the old HMAC implementation now that nobody uses it
anymore.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:20 +10:00
Herbert Xu e9d41164e2 [CRYPTO] tcrypt: Use HMAC template and hash interface
This patch converts tcrypt to use the new HMAC template rather than the
hard-coded version of HMAC.  It also converts all digest users to use
the new cipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:18 +10:00
Herbert Xu 0796ae061e [CRYPTO] hmac: Add crypto template implementation
This patch rewrites HMAC as a crypto template.  This means that HMAC is no
longer a hard-coded part of the API.  It's now a template that generates
standard digest algorithms like any other.

The old HMAC is preserved until all current users are converted.

The same structure can be used by other MACs such as AES-XCBC-MAC.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:46:17 +10:00
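
As an illustration, instantiating the template through the new hash
interface (introduced in the entry below) might look like this; the key,
buffer and "hmac(sha1)" name are placeholders:

	#include <linux/crypto.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	static int hmac_example(const u8 *key, unsigned int keylen,
				const u8 *data, unsigned int len, u8 *out)
	{
		struct crypto_hash *tfm;
		struct hash_desc desc;
		struct scatterlist sg;
		int err;

		tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		crypto_hash_setkey(tfm, key, keylen);
		desc.tfm = tfm;
		desc.flags = 0;
		sg_init_one(&sg, data, len);	/* data must be a linear kernel buffer */

		err = crypto_hash_digest(&desc, &sg, len, out);
		crypto_free_hash(tfm);
		return err;
	}
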
Herbert Xu 055bcee310 [CRYPTO] digest: Added user API for new hash type
The existing digest user interface is inadequate for supporting asynchronous
operations.  For one it doesn't return a value to indicate success or
failure, nor does it take a per-operation descriptor which is essential
for the issuing of requests while other requests are still outstanding.

This patch is the first in a series of steps to remodel the interface
for asynchronous operations.

For the ease of transition the new interface will be known as "hash"
while the old one will remain as "digest".

This patch also changes sg_next to allow chaining.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:17 +10:00
Herbert Xu 7226bc877a [CRYPTO] api: Mark parts of cipher interface as deprecated
Mark the parts of the cipher interface that have been replaced by
block ciphers as deprecated.  Thanks to Andrew Morton for suggesting
doing this before removing them completely.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:46:16 +10:00
Herbert Xu cba83564d1 [CRYPTO] tcrypt: Use block ciphers where applicable
This patch converts tcrypt to use the new block cipher type where
applicable.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:50 +10:00
Herbert Xu a9e62fadf0 [CRYPTO] s390: Added block cipher versions of CBC/ECB
This patch adds block cipher algorithms for S390.  Once all users of the
old cipher type have been converted the existing CBC/ECB non-block cipher
operations will be removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:50 +10:00
Herbert Xu db131ef908 [CRYPTO] cipher: Added block ciphers for CBC/ECB
This patch adds two block cipher algorithms, CBC and ECB.  These
are implemented as templates on top of existing single-block cipher
algorithms.  They invoke the single-block cipher through the new
encrypt_one/decrypt_one interface.

This also optimises the in-place encryption and decryption to remove
the cost of an IV copy each round.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:44:08 +10:00
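
A sketch of driving one of these templates through the block cipher
interface added in the entry below; names and sizes are illustrative:

	#include <linux/crypto.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	static int cbc_example(u8 *buf, unsigned int len,	/* len % 16 == 0 */
			       const u8 *key, const u8 *iv)
	{
		struct crypto_blkcipher *tfm;
		struct blkcipher_desc desc;
		struct scatterlist sg;
		int err;

		tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		crypto_blkcipher_setkey(tfm, key, 16);
		crypto_blkcipher_set_iv(tfm, iv, crypto_blkcipher_ivsize(tfm));

		desc.tfm = tfm;
		desc.flags = 0;
		sg_init_one(&sg, buf, len);

		/* in-place encryption: dst == src */
		err = crypto_blkcipher_encrypt(&desc, &sg, &sg, len);
		crypto_free_blkcipher(tfm);
		return err;
	}
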
Herbert Xu 5cde0af2a9 [CRYPTO] cipher: Added block cipher type
This patch adds the new type of block ciphers.  Unlike current cipher
algorithms which operate on a single block at a time, block ciphers
operate on an arbitrarily long linear area of data.  As it is block-based,
it will skip any data remaining at the end which cannot form a block.

The block cipher has one major difference when compared to the existing
block cipher implementation.  The sg walking is now performed by the
algorithm rather than the cipher mid-layer.  This is needed for drivers
that directly support sg lists.  It also improves performance for all
algorithms as it reduces the total number of indirect calls by one.

In future the existing cipher algorithm will be converted to only have
a single-block interface.  This will be done after all existing users
have switched over to the new block cipher type.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:52 +10:00
Herbert Xu 5c64097aa0 [CRYPTO] scatterwalk: Prepare for block ciphers
This patch prepares the scatterwalk code for use by the new block cipher
type.

Firstly it halves the size of scatter_walk on 32-bit platforms.  This
is important as we allocate at least two of these objects on the stack
for each block cipher operation.

It also exports the symbols since the block cipher code can be built as
a module.

Finally there is a hack in scatterwalk_unmap that relies on progress
being made.  Unfortunately, for hardware crypto we can't guarantee
progress to be made since the hardware can fail.

So this also gets rid of the hack by not advancing the address returned
by scatterwalk_map.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:52 +10:00
Herbert Xu f28776a369 [CRYPTO] cipher: Added encrypt_one/decrypt_one
This patch adds two new operations for the simple cipher that encrypts or
decrypts a single block at a time.  This will be the main interface after
the existing block operations have moved over to the new block ciphers.

It also adds the crypto_cipher type which is currently only used on the
new operations but will be extended to setkey as well once existing users
have been converted to use block ciphers where applicable.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:51 +10:00
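
A sketch of the new single-block calls; the "aes" allocation and 128-bit
key are illustrative, and in-kernel users would normally reach a cipher
through a mode template instead:

	#include <linux/crypto.h>
	#include <linux/err.h>

	static int one_block_example(const u8 *key, const u8 in[16], u8 out[16])
	{
		struct crypto_cipher *tfm;

		tfm = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		crypto_cipher_setkey(tfm, key, 16);
		crypto_cipher_encrypt_one(tfm, out, in);	/* exactly one block */
		crypto_cipher_decrypt_one(tfm, out, out);	/* and back again */

		crypto_free_cipher(tfm);
		return 0;
	}
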
Herbert Xu e853c3cfa8 [CRYPTO] api: Added crypto_type support
This patch adds the crypto_type structure which will be used for all new
crypto algorithm types, beginning with block ciphers.

The primary purpose of this abstraction is to allow different crypto_type
objects for crypto algorithms of the same type, in particular, there will
be a different crypto_type objects for asynchronous algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:51 +10:00
Herbert Xu 8f21cf0d2b [CRYPTO] api: Feed flag directly to crypto_yield
The sleeping flag used to determine whether crypto_yield can actually
yield is really a per-operation flag rather than a per-tfm flag.  This
patch changes crypto_yield to take a flag directly so that we can start
using a per-operation flag instead the tfm flag.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:50 +10:00
Herbert Xu 6d7d684d63 [CRYPTO] api: Added crypto_alloc_base
Up until now all crypto transforms have been of the same type, struct
crypto_tfm, regardless of whether they are ciphers, digests, or other
types.  As a result of that, we check the types at run-time before
each crypto operation.

This is rather cumbersome.  We could instead use different C types for
each crypto type to ensure that the correct types are used at compile
time.  That is, we would have crypto_cipher/crypto_digest instead of
just crypto_tfm.  The appropriate type would then be required for the
actual operations such as crypto_digest_digest.

Now that we have the type/mask fields when looking up algorithms, it
is easy to request for an algorithm of the precise type that the user
wants.  However, crypto_alloc_tfm currently does not expose these new
attributes.

This patch introduces the function crypto_alloc_base which will carry
these new parameters.  It will be renamed to crypto_alloc_tfm once
all existing users have been converted.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:50 +10:00
Herbert Xu f3f632d61a [CRYPTO] api: Added asynchronous flag
This patch adds the asynchronous flag and changes all existing users to
only look up algorithms that are synchronous.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:49 +10:00
Herbert Xu 7fed0bf271 [CRYPTO] api: Add common instance initialisation code
This patch adds the helpers crypto_get_attr_alg and crypto_alloc_instance
which can be used by simple one-argument templates like hmac to process
input parameters and allocate instances.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:04 +10:00
Herbert Xu df89820ebd [CRYPTO] cipher: Removed special IV checks for ECB
This patch makes IV operations on ECB fail through nocrypt_iv rather than
calling BUG().  This is needed to generalise CBC/ECB using the template
mechanism.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:03 +10:00
Herbert Xu c907ee76d8 [CRYPTO] tcrypt: Use test_hash for crc32c
Now that crc32c has been fixed to conform with standard digest semantics,
we can use test_hash for it.  I've turned the last test into a chunky
test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:03 +10:00
Herbert Xu ee7564166d [CRYPTO] digest: Store temporary digest in tfm
When the final result location is unaligned, we store the digest in a
temporary buffer before copying it to the final location.  Currently
that buffer sits on the stack.  This patch moves it to an area in the
tfm, just like the CBC IV buffer.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:02 +10:00
Herbert Xu 560c06ae1a [CRYPTO] api: Get rid of flags argument to setkey
Now that the tfm is passed directly to setkey instead of the ctx, we no
longer need to pass the &tfm->crt_flags pointer.

This patch also gets rid of a few unnecessary checks on the key length
for ciphers as the cipher layer guarantees that the key length is within
the bounds specified by the algorithm.

Rather than testing dia_setkey every time, this patch does it only once
during crypto_alloc_tfm.  The redundant check from crypto_digest_setkey
is also removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:02 +10:00
Herbert Xu 25cdbcd9e5 [CRYPTO] crc32c: Fix unconventional setkey usage
The convention for setkey is that once it is set it should not change,
in particular, init must not wipe out the key set by it.  In fact, init
should always be used after setkey before any digestion is performed.

The only user of crc32c that sets the key is tcrypt.  This patch adds
the necessary init calls there.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:41:01 +10:00
Michal Ludvig b3be9a6d9a [CRYPTO] sha: Add module aliases for sha1 / sha256
Crypto modules should be loadable by their .cra_driver_name, so
we should make MODULE_ALIAS()es with these names. This patch adds
aliases for SHA1 and SHA256 only as that's what we need for
PadLock-SHA driver.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:40:20 +10:00
Herbert Xu 6bfd48096f [CRYPTO] api: Added spawns
Spawns lock a specific crypto algorithm in place.  They can then be used
with crypto_spawn_tfm to allocate a tfm for that algorithm.  When the base
algorithm of a spawn is deregistered, all its spawns will be automatically
removed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:39:29 +10:00
Herbert Xu 492e2b63eb [CRYPTO] api: Allow algorithm lookup by type
This patch also adds the infrastructure to pick an algorithm based on
their type.  For example, this allows you to select the encryption
algorithm "aes", instead of any algorithm registered under the name
"aes".  For now this is only accessible internally.  Eventually it
will be made available through crypto_alloc_tfm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:35:17 +10:00
Herbert Xu 2b8c19dbdc [CRYPTO] api: Add cryptomgr
The cryptomgr module is a simple manager of crypto algorithm instances.
It ensures that parameterised algorithms of the type tmpl(alg) (e.g.,
cbc(aes)) are always created.

This is meant to satisfy the needs for most users.  For more complex
cases such as deeper combinations or multiple parameters, a netlink
module will be created which allows arbitrary expressions to be parsed
in user-space.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:31:44 +10:00
Herbert Xu 2825982d9d [CRYPTO] api: Added event notification
This patch adds a notifier chain for algorithm/template registration events.
This will be used to register compound algorithms such as cbc(aes).  In
future this will also be passed onto user-space through netlink.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:17:13 +10:00
Herbert Xu 4cc7720cd1 [CRYPTO] api: Add template registration
A crypto_template generates a crypto_alg object when given a set of
parameters.  This patch adds the basic data structure for templates
and code to handle their registration/deregistration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:17:12 +10:00
Herbert Xu cce9e06d10 [CRYPTO] api: Split out low-level API
The crypto API is made up of the part facing users such as IPsec and the
low-level part which is used by cryptographic entities such as algorithms.
This patch splits out the latter so that the two APIs are more clearly
delineated.  As a bonus the low-level API can now be modularised if all
algorithms are built as modules.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:30 +10:00
Herbert Xu 6521f30273 [CRYPTO] api: Add crypto_alg reference counting
Up until now we've relied on module reference counting to ensure that the
crypto_alg structures don't disappear from under us.  This was good enough
as long as each crypto_alg came from exactly one module.

However, with parameterised crypto algorithms a crypto_alg object may need
two or more modules to operate.  This means that we need to count the
references to the crypto_alg object directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:16:29 +10:00
Herbert Xu 72fa491912 [CRYPTO] api: Rename crypto_alg_get to crypto_mod_get
The functions crypto_alg_get and crypto_alg_put operate on the crypto
modules rather than the algorithms.  Therefore it makes sense to call
them crypto_mod_get and crypto_mod_put respectively.

This is needed because we need to have real algorithm reference counters
for parameterised algorithms as they can be unregistered from below
when their parameter algorithms are themselves unregistered.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2006-09-21 11:16:29 +10:00
Joachim Fritschi eaf44088ff [CRYPTO] twofish: x86-64 assembly version
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.

Short summary of the tcrypt benchmarks:

Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -27% Cycles
decrypt: -23% Cycles

Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +18%  Cycles
decrypt: +15% Cycles

Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -9% Cycles
decrypt: -8% Cycles

Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-x86_64.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-x86_64.txt


Here is another bonnie++ benchmark with encrypted filesystems. Most runs maxed
out the hd. It should give some idea what the module can do for encrypted filesystem
performance even though you can't see the full numbers.

http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060610_130806_x86_64.html

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:29 +10:00
Joachim Fritschi b9f535ffe3 [CRYPTO] twofish: i586 assembly version
The patch passed the tcrypt tests and automated filesystem tests.
This rewrite resulted in a nice performance increase over my last patch.

Short summary of the tcrypt benchmarks:

Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
encrypt: -33% Cycles
decrypt: -45% Cycles

Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
encrypt: +3%  Cycles
decrypt: -22% Cycles

Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
encrypt: -20% Cycles
decrypt: -36% Cycles

Full Output:
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-i586.txt
http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-i586.txt


Here is another bonnie++ benchmark with encrypted filesystems. All runs with
the twofish assembler modules max out the drivespeed. It should give some
idea what the module can do for encrypted filesystem performance even though
you can't see the full numbers.

http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060611_205432_x86.html

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:28 +10:00
Joachim Fritschi 758f570ea7 [CRYPTO] twofish: Fix the priority
This patch adds a proper driver name and priority to the generic C
implementation to allow coexistence of C and assembler modules.

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:28 +10:00
Joachim Fritschi 2729bb427f [CRYPTO] twofish: Split out common c code
This patch splits up the twofish crypto routine into a common part (key
setup) which will be used by all twofish crypto modules (generic C, i586
assembler and x86_64 assembler) and a generic C part.  It also creates a
new header file which will be used by all 3 modules.

This eliminates all code duplication.

Correctness was verified with the tcrypt module and automated test scripts.

Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-09-21 11:16:27 +10:00
Herbert Xu b9d0a25a48 [CRYPTO] tcrypt: Forbid tcrypt from being built-in
It makes no sense to build tcrypt into the kernel.  In fact, now that
the driver init function's return status is being checked, it is in
fact harmful to do so.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:42 +10:00
Michal Ludvig e805792851 [CRYPTO] tcrypt: Speed benchmark support for digest algorithms
This patch adds speed tests (benchmarks) for digest algorithms.
Tests are run with different buffer sizes (16 bytes, ... 8 kBytes)
and with each buffer multiple tests are run with different update()
sizes (e.g. hash 64 bytes buffer in four 16 byte updates).
There is no correctness checking of the result and all tests and
algorithms use the same input buffer.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Michal Ludvig 14fdf477a7 [CRYPTO] tcrypt: Return -EAGAIN from module_init()
Intentionally return -EAGAIN from module_init() to ensure
it doesn't stay loaded in the kernel.  The module does all
its work from init() and doesn't offer any runtime
functionality => we don't need it in memory, do we?

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Herbert Xu 996e2523cc [CRYPTO] api: Allow replacement when registering new algorithms
We already allow asynchronous removal of existing algorithm modules.  By
allowing the replacement of existing algorithms, we can replace algorithms
without having to wait for all existing users to complete.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:41 +10:00
Herbert Xu d913ea0d6b [CRYPTO] api: Removed const from cra_name/cra_driver_name
We do need to change these names now and even more so in future with
instantiated algorithms.  So let's stop lying to the compiler and get
rid of the const modifiers.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Herbert Xu c7fc05992a [CRYPTO] api: Added cra_init/cra_exit
This patch adds the hooks cra_init/cra_exit which are called during a tfm's
construction and destruction respectively.  This will be used by the instances
to allocate child tfm's.

For now this lets us get rid of the coa_init/coa_exit functions which are
used for exactly that purpose (unlike the dia_init function which is called
for each transaction).

In fact the coa_exit path is currently buggy as it may get called twice
when an error is encountered during initialisation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Michal Ludvig 110bf1c0e9 [CRYPTO] api: Fixed incorrect passing of context instead of tfm
Fix a few omissions in passing TFM instead of CTX to algorithms.

Signed-off-by: Michal Ludvig <michal@logix.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:40 +10:00
Herbert Xu 6c2bb98bc3 [CRYPTO] all: Pass tfm instead of ctx to algorithms
Up until now algorithms have been happy to get a context pointer since
they know everything that's in the tfm already (e.g., alignment, block
size).

However, once we have parameterised algorithms, such information will
be specific to each tfm.  So the algorithm API needs to be changed to
pass the tfm structure instead of the context pointer.

This patch is basically a text substitution.  The only tricky bit is
the assembly routines that need to get the context pointer offset
through asm-offsets.h.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:39 +10:00
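
What the change means for an algorithm implementation, sketched with a
hypothetical my_ctx/my_encrypt:

	#include <linux/crypto.h>

	struct my_ctx {
		u32 round_keys[44];
	};

	/* before: the context pointer was passed in directly;
	 * now the tfm is passed and the context is derived from it */
	static void my_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
	{
		struct my_ctx *ctx = crypto_tfm_ctx(tfm);

		/* per-tfm properties such as the alignmask are also reachable here */
		(void)ctx;
	}
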
Herbert Xu 43600106e3 [CRYPTO] digest: Remove unnecessary zeroing during init
Various digest algorithms operate one block at a time and therefore
keep a temporary buffer of partial blocks.  This buffer does not need
to be initialised since there is a counter which indicates what is and
isn't valid in it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:38 +10:00
Atsushi Nemoto e1147d8f47 [CRYPTO] digest: Add alignment handling
Some hash modules load/store data words directly.  The digest layer
should pass a properly aligned buffer to the update()/final() methods.  This
patch also adds cra_alignmask to some hash modules.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:38 +10:00
Atsushi Nemoto d00e708cef [CRYPTO] khazad: Use 32-bit reads on key
On 64-bit platforms, reading the key (which is only guaranteed to be
32-bit aligned) 64 bits at a time will result in unaligned accesses.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-06-26 17:34:37 +10:00
David McCullough 55e9dce37d [CRYPTO] aes: Fixed array boundary violation
The AES setkey routine writes 64 bytes to the E_KEY area even though
there are only 60 bytes there.  It is in fact safe since E_KEY is
immediately followed by D_KEY which is initialised afterwards.  However,
doing this may trigger undefined behaviour and makes Coverity unhappy.

So by combining E_KEY and D_KEY into one array we sidestep this issue
altogether.

This problem was reported by Adrian Bunk.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:10 +11:00
Atsushi Nemoto 06b42aa94b [CRYPTO] tcrypt: Fix key alignment
Force 32-bit alignment on keys in tcrypt test vectors.  Also rearrange the
structure to prevent unnecessary padding.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:09 +11:00
Atsushi Nemoto 20ea340489 [CRYPTO] all: Add missing cra_alignmask
The "des3_ede" and "serpent" lack cra_alignmask.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:09 +11:00
Eric Sesterhenn bbeb563f7b [CRYPTO] all: Use kzalloc where possible
This patch converts crypto/ to kzalloc usage.
Compile tested with allyesconfig.

Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
Herbert Xu f10b7897ee [CRYPTO] api: Align tfm context as wide as possible
Since tfm contexts can contain arbitrary types we should provide at least
natural alignment (__attribute__ ((__aligned__))) for them.  In particular,
this is needed on the Xscale which is a 32-bit architecture with a u64 type
that requires 64-bit alignment.  This problem was reported by Ronen Shitrit.

The crypto_tfm structure's size was 44 bytes on 32-bit architectures and
80 bytes on 64-bit architectures.  So adding this requirement only means
that we have to add an extra 4 bytes on 32-bit architectures.

On i386 the natural alignment is 16 bytes which also benefits the VIA
Padlock as it no longer has to manually align its context structure to
128 bits.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
Denis Vlasenko a5f8c47305 [CRYPTO] twofish: Use rol32/ror32 where appropriate
Convert open coded rotations to rol32/ror32.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-03-21 20:14:08 +11:00
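
The kind of substitution this amounts to; rol32/ror32 live in
linux/bitops.h:

	#include <linux/types.h>
	#include <linux/bitops.h>

	static u32 rotate_example(u32 x)
	{
		/* open-coded form:  x = (x << 7) | (x >> 25); */
		return rol32(x, 7);
	}
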
Al Viro 1b8623545b [PATCH] remove bogus asm/bug.h includes.
A bunch of asm/bug.h includes are both not needed (since it will get
pulled anyway) and bogus (since they are done too early).  Removed.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2006-02-07 20:56:35 -05:00
Herbert Xu a429d2609c [CRYPTO] cipher: Set alignmask for multi-byte loads
Many cipher implementations use 4-byte/8-byte loads/stores which require
alignment on some architectures.  This patch explicitly sets the alignment
requirements for them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:16:00 -08:00
Herbert Xu 7302533aac [CRYPTO] api: Require block size to be less than PAGE_SIZE/8
The cipher code path may allocate up to two blocks of data on the stack.
Therefore we need to place limits on the maximum block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:58 -08:00
Herbert Xu bcb0ad2b34 [CRYPTO] sha1: Fixed off-by-64 bug in sha1_update
After a partial update, the done pointer is off to the right by 64 bytes.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:56 -08:00
Herbert Xu 827c3911d8 [CRYPTO] cipher: Align temporary buffer in cbc_process_decrypt
Since the temporary buffer is used as an argument to cia_decrypt, it must be
aligned by cra_alignmask.  This bug was found by linux@horizon.com.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:49 -08:00
Nicolas Pitre fa9b98fdab [CRYPTO] sha1: Avoid shifting count left and right
This patch avoids shifting the count left and right needlessly for each
call to sha1_update().  It instead can be done only once at the end in
sha1_final().

Keeping the previous test example (sha1_update() successively called with
len=64), a 1.3% performance increase can be observed on i386, or 0.2% on
ARM.  The generated code is also smaller on ARM.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:46 -08:00
Nicolas Pitre 9d70a6c86c [CRYPTO] sha1: Rename i/j to done/partial
This patch gives more descriptive names to the variables i and j.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:44 -08:00
Nicolas Pitre cfa8d17cc8 [CRYPTO] sha1: Avoid useless memcpy()
The current code unconditionally copies the first block for every call to
sha1_update().  This can be avoided if there is no pending partial block.
This is always the case on the first call to sha1_update() (if the length
is >= 64 of course).

Furthermore, temp is not needed if sha_transform is never invoked.
Also consolidate the sha_transform calls into one to reduce code size.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:41 -08:00
Herbert Xu c8a19c91b5 [CRYPTO] Allow AES C/ASM implementations to coexist
As the Crypto API now allows multiple implementations to be registered
for the same algorithm, we no longer have to play tricks with Kconfig
to select the right AES implementation.

This patch sets the driver name and priority for all the AES
implementations and removes the Kconfig conditions on the C implementation
for AES.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:39 -08:00
Herbert Xu 5cb1454b86 [CRYPTO] Allow multiple implementations of the same algorithm
This is the first step on the road towards asynchronous support in
the Crypto API.  It adds support for having multiple crypto_alg objects
for the same algorithm registered in the system.

For example, each device driver would register a crypto_alg object
for each algorithm that it supports.  While at the same time the
user may load software implementations of those same algorithms.

Users of the Crypto API may then select a specific implementation
by name, or choose any implementation for a given algorithm with
the highest priority.

The priority field is a 32-bit signed integer.  In future it will be
possible to modify it from user-space.

This also provides a solution to the problem of selecting amongst
various AES implementations, that is, aes vs. aes-i586 vs. aes-padlock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:37 -08:00
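
A sketch of the new fields on a hypothetical driver; the cipher callbacks
and context size that a real registration also needs are omitted:

	#include <linux/crypto.h>
	#include <linux/module.h>

	static struct crypto_alg my_aes_alg = {
		.cra_name	 = "aes",		/* what users request */
		.cra_driver_name = "aes-mydriver",	/* hypothetical implementation name */
		.cra_priority	 = 300,			/* beats lower-priority "aes" entries */
		.cra_flags	 = CRYPTO_ALG_TYPE_CIPHER,
		.cra_blocksize	 = 16,
		.cra_module	 = THIS_MODULE,
	};

	/* registered with crypto_register_alg(&my_aes_alg) */
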
Herbert Xu 06ace7a9ba [CRYPTO] Use standard byte order macros wherever possible
A lot of crypto code needs to read/write 32-bit/64-bit words in a
specific byte order.  Many of them open-code this by reading/writing one
byte at a time.  This patch converts all the applicable usages over
to use the standard byte order macros.

This is based on a previous patch by Denis Vlasenko.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2006-01-09 14:15:34 -08:00
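
An example of the conversion, assuming p points at suitably aligned
big-endian data:

	#include <linux/types.h>
	#include <asm/byteorder.h>

	static u32 load_be32(const u8 *p)
	{
		/* open-coded variant:
		 *   return (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
		 */
		return be32_to_cpu(*(const __be32 *)p);
	}
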
Martin Schwidefsky 347a8dc3b8 [PATCH] s390: cleanup Kconfig
Sanitize some s390 Kconfig options.  We have ARCH_S390, ARCH_S390X,
ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT.  Replace these 6 options by
S390, 64BIT and COMPAT.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:53 -08:00
Jan Glauber 05f29fcdb0 [PATCH] s390: in-kernel crypto test vectors
Add new test vectors to the AES test suite for AES CBC and AES with plaintext
larger than AES blocksize.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:51 -08:00
Jan Glauber bf754ae8ef [PATCH] s390: aes support
Add support for the hardware accelerated AES crypto algorithm.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Jan Glauber 0a497c17fe [PATCH] s390: sha256 support
Add support for the hardware accelerated sha256 crypto algorithm.

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Jan Glauber c1e26e1ef7 [PATCH] s390: in-kernel crypto rename
Replace all references to z990 by s390 in the in-kernel crypto files in
arch/s390/crypto.  The code is not specific to a particular machine (z990) but
to the s390 platform.  Big diff, does nothing..

Signed-off-by: Jan Glauber <jan.glauber@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06 08:33:50 -08:00
Herbert Xu 1b40efd772 [CRYPTO] Check cra_alignmask against cra_blocksize
The cipher code relies on the fact that the block size is a multiple
of the required alignment.  So we should check this at the time of
algorithm registration.  We also ensure that the block size is bounded
by the page size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
Herbert Xu 6df5b9f48d [CRYPTO] Simplify one-member scatterlist expressions
This patch rewrites various occurrences of &sg[0] where sg is an array
of length one to simply sg.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
David Hardeman 378f058cc4 [PATCH] Use sg_set_buf/sg_init_one where applicable
This patch uses sg_set_buf/sg_init_one in some places where it was
duplicated.

Signed-off-by: David Hardeman <david@2gen.com>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Greg KH <greg@kroah.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jgarzik@pobox.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2005-10-30 11:19:43 +11:00
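
With the helpers, wrapping a linear buffer in a one-entry scatterlist
reduces to:

	#include <linux/scatterlist.h>

	static void wrap_buffer(struct scatterlist *sg, void *buf, unsigned int len)
	{
		/* fills in the page, offset and length fields for buf */
		sg_init_one(sg, buf, len);
	}
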
Herbert Xu fe2d5295a1 [CRYPTO] Fix boundary check in standard multi-block cipher processors
The boundary check in the standard multi-block cipher processors is
broken when nbytes is not a multiple of bsize.  In those cases it will
always process an extra block.

This patch corrects the check so that it processes at most nbytes of
data.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-06 14:49:44 -07:00
Herbert Xu 64baf3cfea [CRYPTO]: Added CRYPTO_TFM_REQ_MAY_SLEEP flag
The crypto layer currently uses in_atomic() to determine whether it is
allowed to sleep.  This is incorrect since spin locks don't always cause
in_atomic() to return true.

Instead of that, this patch returns to an earlier idea of a per-tfm flag
which determines whether sleeping is allowed.  Unlike the earlier version,
the default is to not allow sleeping.  This ensures that no existing code
can break.

As usual, this flag may either be set through crypto_alloc_tfm(), or
just before a specific crypto operation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-01 17:43:05 -07:00
Aaron Grothe fb4f10ed50 [CRYPTO]: Fix XTEA implementation
The XTEA implementation was incorrect due to a misinterpretation of
operator precedence.  Because of the wide-spread nature of this
error, the erroneous implementation will be kept, albeit under the
new name of XETA.

Signed-off-by: Aaron Grothe <ajgrothe@yahoo.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-01 17:42:46 -07:00
Jesper Juhl 77933d7276 [PATCH] clean up inline static vs static inline
`gcc -W' likes to complain if the static keyword is not at the beginning of
the declaration.  This patch fixes all remaining occurrences of "inline
static" up with "static inline" in the entire kernel tree (140 occurrences in
47 files).

While making this change I came across a few lines with trailing whitespace
that I also fixed up.  I have also added or removed a blank line or two here
and there, but there are no functional changes in the patch.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 16:26:20 -07:00
Herbert Xu 9d853c3757 [CRYPTO]: Fix zero-extension bug on 64-bit architectures.
Noticed by Ken-ichirou MATSUZAWA.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-15 07:41:31 -07:00
Dag Arne Osvik e1d5dea1df [CRYPTO] Add faster DES code from Dag Arne Osvik
I've made a new implementation of DES to replace the old one in the kernel.
It provides faster encryption on all tested processors apart from the original
Pentium, and key setup is many times faster.

                                Speed relative to old kernel implementation
Processor       des_setkey      des_encrypt     des3_ede_setkey des3_ede_encrypt
Pentium
120Mhz          6.8             0.82            7.2             0.86
Pentium III
1.266Ghz        5.6             1.19            5.8             1.34
Pentium M
1.3Ghz          5.7             1.15            6.0             1.31
Pentium 4
2.266Ghz        5.8             1.24            6.0             1.40
Pentium 4E
3Ghz            5.4             1.27            5.5             1.48
StrongARM 1110
206Mhz          4.3             1.03            4.4             1.14
Athlon XP
2Ghz            7.8             1.44            8.1             1.61
Athlon 64
2Ghz            7.8             1.34            8.3             1.49

Signed-off-by: Dag Arne Osvik <da@osvik.no>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:44 -07:00
Herbert Xu a9df3597fe [CRYPTO] Remove unused iv field from context structure
The iv field in des_ctx/des3_ede_ctx/serpent_ctx has never been used.
This was noticed by Dag Arne Osvik.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:21 -07:00
Andreas Steinmetz a2a892a236 [CRYPTO] Add x86_64 asm AES
Implementation:
===============
The encrypt/decrypt code is based on an x86 implementation I did a while
ago which I never published. This unpublished implementation does
include an assembler based key schedule and precomputed tables. For
simplicity and best acceptance, however, I took Gladman's in-kernel code
for table generation and key schedule for the kernel port of my
assembler code and modified this code to produce the key schedule as
required by my assembler implementation. File locations and Kconfig are
kept similar to the i586 AES assembler implementation.
It may seem a little bit strange to use 32 bit I/O and registers in the
assembler implementation but this gives the best code size. My
implementation takes one instruction more per round compared to
Gladman's x86 assembler but it doesn't require any stack for local
variables or saved registers and it is less serialized than Gladman's
code.
Note that all comparisons to Gladman's code were done after my code was
implemented. I only used FIPS PUB 197 for the implementation, so my
implementation is independent work.
If anybody has a better assembler solution for x86_64 I'll be pleased to
have my code replaced with the better solution.

Testing:
========
The implementation passes the in-kernel crypto testing module and I'm
running it without any problems on my laptop where it is mainly used for
dm-crypt.

Microbenchmark:
===============
The microbenchmark was done in userspace with similar compile flags as
used during kernel compile.
Encrypt/decrypt is about 35% faster than the generic C implementation.
As both the generic C and my assembler implementation are table driven,
I don't really expect that there is much room for further improvement,
though I'll be glad to be corrected here.
The key schedule is about 5% slower than the generic C implementation.
This is due to the fact that some more work has to be done in the key
schedule routine to fit the schedule to the assembler implementation.

Code Size:
==========
Encrypt and decrypt are together about 2.1 Kbytes smaller than the
generic C implementation which is important with regard to L1 cache
usage. The key schedule routine is about 100 bytes larger than the
generic C implementation.

Data Size:
==========
There's no difference in data size requirements between the assembler
implementation and the generic C implementation.

License:
========
Gladman's code is dual BSD/GPL whereas my assembler code is GPLv2 only
(I'm not going to change the license for my code). So I had to change
the module license for the x86_64 aes module from 'Dual BSD/GPL' to
'GPL' to reflect the most restrictive license within the module.
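
For reference, a minimal sketch of how such a license declaration looks in a
module's source file (the description string here is illustrative, not quoted
from the actual driver):

#include <linux/module.h>

/* The module must carry the most restrictive license of its parts; with the
 * assembler portion being GPLv2 only, "Dual BSD/GPL" would overstate what
 * the combined work permits. */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("AES cipher algorithm, x86_64 assembler sketch");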

Signed-off-by: Andreas Steinmetz <ast@domdv.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:55:00 -07:00
Jesper Juhl a61cc44812 [CRYPTO] Add null short circuit to crypto_free_tfm
As far as I'm aware there's a general consensus that functions that are
responsible for freeing resources should be able to cope with being passed
a NULL pointer. This makes sense as it removes the need for all callers to
check for NULL, thus eliminating the bugs that happen when some forget
(it is safer to just check centrally in the freeing function), and it also
makes for smaller code all over due to the lack of all those NULL checks.
This patch makes it safe to pass the crypto_free_tfm() function a NULL
pointer. Once this patch is applied we can start removing the NULL checks
from the callers.
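
A minimal sketch of the pattern, not the actual kernel code (the type and
function names below are invented for illustration):

#include <stdlib.h>

struct example_tfm;	/* opaque handle, for the sake of the sketch */

/* Follow the free()/kfree() convention: NULL is a harmless no-op, so no
 * caller ever needs a guard before calling this. */
static void example_free_tfm(struct example_tfm *tfm)
{
	if (tfm == NULL)
		return;

	/* ...drop the algorithm reference, wipe sensitive state... */
	free(tfm);
}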

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:54:31 -07:00
Herbert Xu 915e8561d5 [CRYPTO] Handle unaligned iv from encrypt_iv/decrypt_iv
Even though cit_iv is now always aligned, the user can still supply an
unaligned iv through crypto_cipher_encrypt_iv/crypto_cipher_decrypt_iv.
This patch will check the alignment of the user-supplied iv and copy
it if necessary.
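
Roughly, the check amounts to the following (a simplified sketch; names,
signature and buffer handling are illustrative, not the real implementation):

#include <stdint.h>
#include <string.h>

/* Use the caller's IV directly when its address satisfies the algorithm's
 * alignment mask; otherwise fall back to an aligned copy. */
static const uint8_t *align_iv(const uint8_t *iv, uint8_t *aligned_buf,
			       size_t ivsize, unsigned long alignmask)
{
	if (((unsigned long)iv & alignmask) == 0)
		return iv;

	memcpy(aligned_buf, iv, ivsize);
	return aligned_buf;
}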

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:47 -07:00
Herbert Xu fbdae9f3e7 [CRYPTO] Ensure cit_iv is aligned correctly
This patch ensures that cit_iv is aligned according to cra_alignmask
by allocating it as part of the tfm structure.  As a side effect the
crypto layer will also guarantee that the tfm ctx area has enough space
to be aligned by cra_alignmask.  This allows us to remove the extra
space reservation from the Padlock driver.
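
In outline, placing cit_iv inside the tfm allocation comes down to rounding
its offset up to the alignment mask; a simplified sketch, not the actual
layout code:

#include <stddef.h>

/* Offset of the IV within the tfm allocation: the context area comes first,
 * then the offset is rounded up so the IV is aligned to (alignmask + 1). */
static size_t iv_offset(size_t ctx_size, unsigned long alignmask)
{
	return (ctx_size + alignmask) & ~alignmask;
}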

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:29 -07:00
Adrian Bunk 176c3652c5 [CRYPTO] Make crypto_alg_lookup static
This patch makes a needlessly global function static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:53:09 -07:00
Herbert Xu 9547737799 [CRYPTO] Add alignmask for low-level cipher implementations
The VIA Padlock device requires the input and output buffers to
be aligned on 16-byte boundaries.  This patch adds the alignmask
attribute for low-level cipher implementations to indicate their
alignment requirements.

The mid-level crypt() function will copy the input/output buffers
if they are not aligned correctly before they are passed to the
low-level implementation.

Strictly speaking, some of the software implementations require
the buffers to be aligned on 4-byte boundaries as they do 32-bit
loads.  However, it is not clear whether it is better to copy
the buffers or pay the penalty for unaligned loads/stores.
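
For illustration, a trimmed-down sketch of how an implementation advertises
its requirement (the real struct crypto_alg has many more fields; only the
alignment-related ones are shown, and the initializer follows the Padlock
example of 16-byte alignment):

struct example_alg {
	const char   *cra_name;
	unsigned int  cra_blocksize;
	unsigned int  cra_alignmask;	/* buffers must be aligned to
					 * (cra_alignmask + 1) bytes */
};

static struct example_alg padlock_like_aes = {
	.cra_name      = "aes",
	.cra_blocksize = 16,
	.cra_alignmask = 15,		/* i.e. 16-byte aligned buffers */
};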

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:52:09 -07:00
Herbert Xu 40725181b7 [CRYPTO] Add support for low-level multi-block operations
This patch adds hooks for cipher algorithms to implement multi-block
ECB/CBC operations directly.  This is expected to provide significant
performance boosts to the VIA Padlock.

It could also be used for improving software implementations such as
AES where operating on multiple blocks at a time may enable certain
optimisations.
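
The shape of such hooks might look like the following; the struct and field
names here are invented for illustration and may not match what the patch
actually adds:

#include <stddef.h>

/* Hypothetical multi-block hooks: process nbytes (a multiple of the block
 * size) in a single call instead of one block at a time. */
struct example_blk_ops {
	unsigned int blocksize;
	int (*encrypt_ecb)(void *ctx, unsigned char *dst,
			   const unsigned char *src, size_t nbytes);
	int (*encrypt_cbc)(void *ctx, unsigned char *dst,
			   const unsigned char *src, size_t nbytes,
			   unsigned char *iv);
};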

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:52 -07:00
Herbert Xu c774e93e21 [CRYPTO] Add plumbing for multi-block operations
The VIA Padlock device is able to perform much better when multiple
blocks are fed to it at once.  As this device offers an exceptional
throughput rate it is worthwhile to optimise the infrastructure
specifically for it.

We shift the existing page-sized fast path down to the CBC/ECB functions.
We can then replace the CBC/ECB functions with functions provided by the
underlying algorithm that performs the multi-block operations.

As a side-effect this improves the performance of large cipher operations
for all existing algorithm implementations.  I've measured the gain to be
around 5% for 3DES and 15% for AES.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:31 -07:00
Jesper Juhl 8279dd748f [CRYPTO] Don't check for NULL before kfree()
Checking a pointer for NULL before calling kfree() on it is redundant.
This patch removes such checks from crypto/
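
The change boils down to dropping guards of the following kind (the context
struct and field are hypothetical, shown only to illustrate the idiom):

#include <linux/slab.h>

struct some_ctx {
	void *buffer;
};

/* Before: a redundant guard, since kfree(NULL) is defined to be a no-op. */
static void release_buffer_old(struct some_ctx *ctx)
{
	if (ctx->buffer)
		kfree(ctx->buffer);
}

/* After: call kfree() unconditionally; the behaviour is identical. */
static void release_buffer_new(struct some_ctx *ctx)
{
	kfree(ctx->buffer);
}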

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-07-06 13:51:00 -07:00
Herbert Xu 6a17944ca1 [CRYPTO]: Use CPU cycle counters in tcrypt
After using this facility for a while to test my changes to the
cipher crypt() layer, I realised that I should've listened to Dave
and made this thing use CPU cycle counters :) As it is, it's too
jittery for me to feel safe about relying on the results.

So here is a patch to make it use CPU cycles by default but fall
back to jiffies if the user specifies a non-zero sec value.
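
A simplified sketch of that selection logic (the real tcrypt code is
structured differently; the helper below is invented for illustration):

#include <linux/jiffies.h>
#include <linux/timex.h>	/* get_cycles() */

/* Time one operation in CPU cycles by default, or in jiffies when the user
 * asked for a wall-clock run by passing a non-zero "sec". */
static unsigned long long time_one_op(void (*op)(void), unsigned int sec)
{
	if (sec) {
		unsigned long start = jiffies;

		op();
		return jiffies - start;		/* elapsed jiffies */
	} else {
		cycles_t start = get_cycles();

		op();
		return get_cycles() - start;	/* elapsed cycles */
	}
}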

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:29:03 -07:00
Herbert Xu dce907c00f [CRYPTO]: Use template keys for speed tests if possible
The existing keys used in the speed tests do not pass the 3DES quality check.
This patch makes them use the template keys instead.

Other algorithms can supply template keys through the same interface if needed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:27:51 -07:00
Harald Welte ebfd9bcf16 [CRYPTO]: Add cipher speed tests
From: Reyk Floeter <reyk@vantronix.net>

I recently needed to do some benchmarking on cryptoapi, and
I found reyk's very useful performance test patch [1].

However, I could not find any discussion on why that extension (or
something providing a similar feature but different implementation) was
not merged into mainline.  If there was such a discussion, can someone
please point me to the archive[s]?

I've now merged the old patch into 2.6.12-rc1; the result can be found
attached to this email.

[1] http://lists.logix.cz/pipermail/padlock/2004/000010.html

Signed-off-by: Harald Welte <laforge@gnumonks.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:27:23 -07:00
Herbert Xu 3cc3816f93 [CRYPTO]: Kill unnecessary strncpy from tcrypt
It seems that bad code tends to get copied (see test_cipher_speed).  So let's
kill this idiom before it spreads any further.
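
In outline, the idiom being removed looks like this (buffer name and size are
illustrative); the name is never modified, so the copy buys nothing and the
unterminated-copy pitfall of strncpy() comes for free:

#include <string.h>

void run_test(const char *algo);	/* hypothetical consumer of the name */

static void bad_idiom(const char *algo)
{
	char name[64];

	/* Pointless copy; also not guaranteed to be NUL-terminated. */
	strncpy(name, algo, sizeof(name));
	run_test(name);
}

static void good_idiom(const char *algo)
{
	run_test(algo);		/* just pass the string straight through */
}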

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:26:36 -07:00
Herbert Xu ef2736fc74 [CRYPTO]: White space and coding style clean up in tcrypt
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-06-22 13:26:03 -07:00
Herbert Xu 15333038d5 [CRYPTO]: Only reschedule if !in_atomic()
The netlink gfp_any() problem made me double-check the uses of in_softirq()
in crypto/*.  It seems to me that we should be checking in_atomic() instead
of in_softirq() in crypto_yield.  Otherwise people calling the crypto ops
with spin locks held or preemption disabled will get burnt, right?
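
In sketch form, the yield helper would then become something like this (not
the literal kernel code):

#include <linux/hardirq.h>	/* in_atomic() */
#include <linux/sched.h>	/* cond_resched() */

/* Only reschedule when sleeping is actually allowed: in_atomic() is a
 * broader test than in_softirq(), also tripping when preemption has been
 * disabled, e.g. under a spinlock on a preemptible kernel. */
static void example_crypto_yield(void)
{
	if (!in_atomic())
		cond_resched();
}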

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-05-23 12:36:25 -07:00
Patrick McHardy d0856009db [PATCH] crypto: fix null encryption/compression
null_encrypt() needs to copy the data in case src and dst are distinct;
null_compress() needs to copy the data in any case, as far as I can tell.
I joined compress/decompress and encrypt/decrypt to avoid duplicating code.
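
A minimal sketch of the shared copy step that this implies (the helper's name
and signature are illustrative, not the actual patch):

#include <string.h>

/* "Null" transform: the output bytes equal the input bytes, but they still
 * have to end up in dst whenever dst and src are different buffers. */
static int null_copy(unsigned char *dst, const unsigned char *src,
		     unsigned int len)
{
	if (dst != src)
		memcpy(dst, src, len);
	return 0;
}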

Without this patch ESP null_enc packets look like this:

IP (tos 0x0, ttl  64, id 23130, offset 0, flags [DF], length: 128)
10.0.0.1 > 10.0.0.2: ESP(spi=0x0f9ca149,seq=0x4)
	0x0000:  4500 0080 5a5a 4000 4032 cbef 0a00 0001  E...ZZ@.@2......
	0x0010:  0a00 0002 0f9c a149 0000 0004 0000 0000  .......I........
	0x0020:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0040:  0000 0000 0000 0000 0000 0000 0000 0000  ................
	0x0050:  0000                                     ..

IP (tos 0x0, ttl  64, id 256, offset 0, flags [DF], length: 128)
10.0.0.2 > 10.0.0.1: ESP(spi=0x0e4f7b51,seq=0x2)
	0x0000:  4500 0080 0100 4000 4032 254a 0a00 0002  E.....@.@2%J....
	0x0010:  0a00 0001 0e4f 7b51 0000 0002 a8a8 a8a8  .....O{Q........
	0x0020:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0030:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0040:  a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8 a8a8  ................
	0x0050:  a8a8                                     ..

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-17 07:59:18 -07:00
Paolo 'Blaisorblade' Giarrusso c45166be3c [PATCH] uml: support AES i586 crypto driver
We want to make it possible for the user to enable the i586 AES
implementation.  This requires some restructuring.

- Add a CONFIG_UML_X86 to notify that we are building a UML for i386.

- Rename CONFIG_64_BIT to CONFIG_64BIT as is used for all other archs

- Tell crypto/Kconfig that UML_X86 is as good as X86

- Tell it that it must exclude not X86_64 but 64BIT, which will give the
  same results.

- Tell kbuild to descend down into arch/i386/crypto/ to build what's needed.

Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:58:54 -07:00
Artem B. Bityuckiy 9ffb7146f0 [PATCH] crypto: call zlib end functions on deflate exit path
In the deflate_[compress|uncompress|pcompress] functions we call the
zlib_[in|de]flateReset function at the beginning.  This is OK.  But when we
unload the deflate module we don't call zlib_[in|de]flateEnd to free all
the zlib internal data.  It looks like a bug to me.  Please consider the
attached patch.
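
In outline, the missing teardown would look something like this (the function
is hypothetical and the workspace handling is an assumption about how the
deflate context is set up):

#include <linux/zlib.h>
#include <linux/vmalloc.h>

/* Release zlib's internal state for both directions, then free the
 * workspaces that were allocated for the streams at init time. */
static void deflate_streams_exit(struct z_stream_s *comp,
				 struct z_stream_s *decomp)
{
	zlib_deflateEnd(comp);
	vfree(comp->workspace);

	zlib_inflateEnd(decomp);
	vfree(decomp->workspace);
}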

Signed-off-by: Artem B. Bityuckiy <dedekind@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 15:23:58 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00