Commit graph

478 commits

Author SHA1 Message Date
Herbert Xu 67412f0e78 [CRYPTO] hmac: Avoid calling virt_to_page on key
When HMAC gets a key longer than the block size of the hash, it needs
to feed it as input to the hash to reduce it to a fixed length.  As
it is, HMAC converts the key into a scatter-gather list.  However,
this doesn't work on certain platforms if the key is not allocated
via kmalloc.  For example, the keys from tcrypt are stored in the
rodata section and this causes it to fail with HMAC on x86-64.

This patch fixes this by copying the key to memory obtained via
kmalloc before hashing it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-05-07 21:08:56 +08:00
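
The fix is the standard HMAC treatment of over-long keys, performed from heap
memory so the hash layer never has to map rodata or stack pages into a
scatterlist. A minimal userspace sketch of that pattern (toy_digest() is a
self-contained stand-in, not the kernel hash API):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define DIGEST_SIZE 4

/* Toy stand-in digest (FNV-1a), only to keep the sketch self-contained;
 * in the kernel the underlying hash algorithm is used instead. */
static void toy_digest(const uint8_t *data, size_t len, uint8_t *out)
{
	uint32_t h = 2166136261u;

	for (size_t i = 0; i < len; i++)
		h = (h ^ data[i]) * 16777619u;
	memcpy(out, &h, DIGEST_SIZE);
}

/* If the key exceeds the hash block size, hash a heap copy of it (kmalloc()
 * in the kernel) so the hash never has to walk rodata or stack pages, and
 * use the digest as the effective key.  'out' must hold at least
 * max(keylen, DIGEST_SIZE) bytes.  Returns the effective key length. */
size_t hmac_reduce_key(const uint8_t *key, size_t keylen,
		       size_t blocksize, uint8_t *out)
{
	if (keylen <= blocksize) {
		memcpy(out, key, keylen);
		return keylen;
	}

	uint8_t *tmp = malloc(keylen);
	if (!tmp)
		return 0;
	memcpy(tmp, key, keylen);
	toy_digest(tmp, keylen, out);
	free(tmp);
	return DIGEST_SIZE;
}
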
Julia Lawall b1145ce395 [CRYPTO] cryptd: Correct kzalloc error test
Normally, kzalloc returns NULL or a valid pointer value, not a value to be
tested using IS_ERR.

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-05-01 18:22:28 +08:00
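
For reference, the convention looks roughly like this in kernel style (a
generic sketch, not code taken from cryptd itself):

#include <linux/slab.h>
#include <linux/err.h>
#include <linux/errno.h>

/* kzalloc() signals failure with NULL, so it is tested with !ptr, while
 * IS_ERR()/PTR_ERR() are reserved for ERR_PTR()-encoded return values. */
void *alloc_ctx(size_t size)
{
	void *ctx = kzalloc(size, GFP_KERNEL);

	if (!ctx)				/* correct test for kzalloc */
		return ERR_PTR(-ENOMEM);	/* callers may then use IS_ERR() */

	return ctx;
}
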
Herbert Xu 46f8153cc5 [CRYPTO] eseqiv: Fix off-by-one encryption
After attaching the IV to the head during encryption, eseqiv does not
increase the encryption length by that amount.  As such the last block
of the actual plain text will be left unencrypted.

Fortunately the only user of this code, hifn, currently crashes, so this
shouldn't affect anyone :)

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-05-01 18:22:28 +08:00
Patrick McHardy 161613293f [CRYPTO] authenc: Fix async crypto crash in crypto_authenc_genicv()
crypto_authenc_givencrypt_done uses req->data as struct aead_givcrypt_request,
while it really points to a struct aead_request, causing this crash:

BUG: unable to handle kernel paging request at 6b6b6b6b
IP: [<dc87517b>] :authenc:crypto_authenc_genicv+0x23/0x109
*pde = 00000000
Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
Modules linked in: hifn_795x authenc esp4 aead xfrm4_mode_tunnel sha1_generic hmac crypto_hash]

Pid: 3074, comm: ping Not tainted (2.6.25 #4)
EIP: 0060:[<dc87517b>] EFLAGS: 00010296 CPU: 0
EIP is at crypto_authenc_genicv+0x23/0x109 [authenc]
EAX: daa04690 EBX: daa046e0 ECX: dab0a100 EDX: daa046b0
ESI: 6b6b6b6b EDI: dc872054 EBP: c033ff60 ESP: c033ff0c
 DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
Process ping (pid: 3074, ti=c033f000 task=db883a80 task.ti=dab6c000)
Stack: 00000000 daa046b0 c0215a3e daa04690 dab0a100 00000000 ffffffff db9fd7f0
       dba208c0 dbbb1720 00000001 daa04720 00000001 c033ff54 c0119ca9 dc852a75
       c033ff60 c033ff60 daa046e0 00000000 00000001 c033ff6c dc87527b 00000001
Call Trace:
 [<c0215a3e>] ? dev_alloc_skb+0x14/0x29
 [<c0119ca9>] ? printk+0x15/0x17
 [<dc87527b>] ? crypto_authenc_givencrypt_done+0x1a/0x27 [authenc]
 [<dc850cca>] ? hifn_process_ready+0x34a/0x352 [hifn_795x]
 [<dc8353c7>] ? rhine_napipoll+0x3f2/0x3fd [via_rhine]
 [<dc851a56>] ? hifn_check_for_completion+0x4d/0xa6 [hifn_795x]
 [<dc851ab9>] ? hifn_tasklet_callback+0xa/0xc [hifn_795x]
 [<c011d046>] ? tasklet_action+0x3f/0x66
 [<c011d230>] ? __do_softirq+0x38/0x7a
 [<c0105a5f>] ? do_softirq+0x3e/0x71
 [<c011d17c>] ? irq_exit+0x2c/0x65
 [<c010e0c0>] ? smp_apic_timer_interrupt+0x5f/0x6a
 [<c01042e4>] ? apic_timer_interrupt+0x28/0x30
 [<dc851640>] ? hifn_handle_req+0x44a/0x50d [hifn_795x]
 ...

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-05-01 18:22:28 +08:00
Sebastian Siewior 584fffc8b1 [CRYPTO] kconfig: Ordering cleanup
Ciphers, block modes, you name it: they are grouped together and sorted.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:34 +08:00
Kamalesh Babulal 3af5b90bde [CRYPTO] all: Clean up init()/fini()
On Thu, Mar 27, 2008 at 03:40:36PM +0100, Bodo Eggert wrote:
> Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> wrote:
> 
> > This patch cleans up the crypto code, replacing the init() and fini()
> > with the <algorithm name>_init/_fini
> 
> This part is OK.
> 
> > or init/fini_<algorithm name> (if the 
> > <algorithm name>_init/_fini exist)
> 
> Having init_foo and foo_init won't be a good thing, will it? I'd start
> confusing them.
> 
> What about foo_modinit instead?

Thanks for the suggestion; init() is replaced with

	<algorithm name>_mod_init()

and fini() is replaced with <algorithm name>_mod_fini().
 
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:34 +08:00
Sebastian Siewior 5427663f49 [CRYPTO] aes: Export generic setkey
The key expansion routine could be made a little more generic, become
a kernel-doc entry and then get exported.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:34 +08:00
Sebastian Siewior c3715cb90f [CRYPTO] api: Make the crypto subsystem fully modular
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:23 +08:00
Kevin Coffman 76cb952179 [CRYPTO] cts: Add CTS mode required for Kerberos AES support
Implement CTS wrapper for CBC mode required for support of AES
encryption support for Kerberos (rfc3962).

Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:23 +08:00
Marcin Slusarz fd4609a8e0 [CRYPTO] lrw: Replace all adds to big endians variables with be*_add_cpu
replace all:
big_endian_variable = cpu_to_beX(beX_to_cpu(big_endian_variable) +
					expression_in_cpu_byteorder);
with:
	beX_add_cpu(&big_endian_variable, expression_in_cpu_byteorder);

Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Roel Kluin <12o3l@tiscali.nl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:22 +08:00
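
A before/after sketch of the 32-bit case, in kernel style (the wrapper names
are illustrative):

#include <linux/types.h>
#include <linux/byteorder/generic.h>	/* beX_add_cpu() helpers */

/* Before: open-coded byte-order round trip on a big-endian counter. */
static inline void bump_old(__be32 *ctr, u32 inc)
{
	*ctr = cpu_to_be32(be32_to_cpu(*ctr) + inc);
}

/* After: the helper hides the conversion dance. */
static inline void bump_new(__be32 *ctr, u32 inc)
{
	be32_add_cpu(ctr, inc);
}
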
Sebastian Siewior f0df30b1f7 [CRYPTO] tcrypt: Change the XTEA test vectors
The third test vector of ECB-XTEA-ENC fails for me; all others
are fine. I could not find an RFC or anything else where they
are defined. The test vector has not been modified since git
started recording history. The implementation is very close
(not to say equal) to what is available as Public Domain (they
recommend 64 rounds while the in-kernel version uses 32). Therefore I
believe that there is a typo somewhere and tcrypt always reported
*fail* instead of *okay*.
This patch replaces input + result of the third test vector with
result + input from the third decryption vector. The key is the
same, the other three test vectors are also the reverse.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:22 +08:00
Sebastian Siewior de224c309b [CRYPTO] tcrypt: Shrink the tcrypt module
Currently the tcrypt module is about 2 MiB on x86-32. The
main reason for the huge size is the data segment which contains
all the test vectors for each algorithm. The test vectors are
statically allocated in an array and the size of the array has been
drastically increased by the merge of the Salsa20 test vectors.

With a hint from Benedigt Spranger I found a way to convert
those fixed-length arrays to strings which are flexible
in size. VIM and regex were also very helpful :)
So, I am talking about a shrinking of ~97% on x86-32:

   text    data     bss     dec     hex filename
  18309 2039708      20 2058037  1f6735 tcrypt-b4.ko
  45628   23516      80   69224   10e68 tcrypt.ko

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:22 +08:00
Sebastian Siewior 562954d5e0 [CRYPTO] tcrypt: Change the usage of the test vectors
The test routines (test_{cipher,hash,aead}) are making a copy
of the test template and are performing the encryption in
place. This patch changes the creation of the copy so it will
work even if the source address of the input data isn't an array
inside of the template but a pointer.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:22 +08:00
Jan Engelhardt 48c8949ea8 [CRYPTO] api: Constify function pointer tables
Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:22 +08:00
Sebastian Siewior d5dc392742 [CRYPTO] tcrypt: Shrink speed templates
The speed templates always look the same: the key size
is repeated for each block size and we always test the same
block sizes. The addition of one inner loop makes it possible
to get rid of the struct and use a tiny
u8 array :)

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:21 +08:00
Sebastian Siewior 477035c2ab [CRYPTO] tcrypt: Group common speed templates
Some of the implemented crypto ciphers support similar key sizes
(16, 24 & 32 bytes). They can be grouped together and use a common
template instead of their own which contains the same data.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:21 +08:00
Jan Glauber 78f8b3a240 [CRYPTO] sha512: Rename sha512 to sha512_generic
Rename sha512 to sha512_generic and add a MODULE_ALIAS for sha512
so all sha512 implementations can be loaded automatically.

Keep the broken tabs so git recognizes this as a rename.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:19:21 +08:00
Alexey Dobriyan 607424d858 [CRYPTO] api: Switch to proc_create()
Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-21 10:15:56 +08:00
Dan Williams 636bdeaa12 dmaengine: ack to flags: make use of the unused bits in the 'ack' field
'ack' is currently a simple integer that flags whether or not a client is done
touching fields in the given descriptor.  It is effectively just a single bit
of information.  Converting this to a flags parameter allows the other bits to
be put to use to control completion actions, like dma-unmap, and capture
results, like xor-zero-sum == 0.

Changes are one of:
1/ convert all open-coded ->ack manipulations to use async_tx_ack
   and async_tx_test_ack.
2/ set the ack bit at prep time where possible
3/ make drivers store the flags at prep time
4/ add flags to the device_prep_dma_interrupt prototype

Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:54 -07:00
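
The general shape of such a conversion, as a self-contained sketch (the names
below are made up; the real helpers are the async_tx_ack()/async_tx_test_ack()
mentioned above):

/* The single 'ack' integer becomes one bit in a flags word, freeing the
 * remaining bits for completion control and result capture. */
enum tx_flags {
	TX_ACK       = 1 << 0,
	TX_DMA_UNMAP = 1 << 1,	/* example of a bit that becomes available */
};

struct tx_desc {
	unsigned long flags;	/* replaces 'int ack' */
};

static inline void tx_ack(struct tx_desc *tx)
{
	tx->flags |= TX_ACK;
}

static inline int tx_test_ack(const struct tx_desc *tx)
{
	return (tx->flags & TX_ACK) != 0;
}
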
Dan Williams 19242d7233 async_tx: fix multiple dependency submission
Shrink struct dma_async_tx_descriptor and introduce
async_tx_channel_switch to properly inject a channel switch interrupt in
the descriptor stream.  This simplifies the locking model as drivers no
longer need to handle dma_async_tx_descriptor.lock.

Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-04-17 13:25:05 -07:00
Joy Latten 1edcf2e1ee [CRYPTO] xcbc: Fix crash when ipsec uses xcbc-mac with big data chunk
The kernel crashes when ipsec passes a udp packet of about 14XX bytes
of data to aes-xcbc-mac.

It seems the first xxxx bytes of the data are in the first sg entry,
and the remaining xx bytes are in the next sg entry. But we don't
check the next sg entry to see if we need to look the page up.

I noticed that in hmac.c we do a scatterwalk_sg_next() to do this check
and possible lookup, so xcbc.c needs to use this routine too.

A 15-hour run of an ipsec stress test sending streams of tcp and
udp packets of various sizes, using this patch and
aes-xcbc-mac, completed successfully, so hopefully this fixes the
problem.
 
Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-04-02 14:36:09 +08:00
Dan Williams 8d8002f642 async_tx: avoid the async xor_zero_sum path when src_cnt > device->max_xor
If the channel cannot perform the operation in one call to
->device_prep_dma_zero_sum, then fall back to the xor+page_is_zero path.
This only affects users with arrays larger than 16 devices on iop13xx or
32 devices on iop3xx.

Cc: <stable@kernel.org>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-18 17:01:00 -07:00
Dan Williams 3280ab3e88 async_tx: checkpatch says s/__FUNCTION__/__func__/g
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-03-13 10:57:10 -07:00
Herbert Xu f13ba2f7d3 [CRYPTO] skcipher: Fix section mismatches
The previous patch to move chainiv and eseqiv into blkcipher created
a section mismatch for the chainiv exit function which was also called
from __init.  This patch removes the __exit marking on it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-03-08 20:29:43 +08:00
Joy Latten 2f40a178e7 [CRYPTO] xcbc: Fix crash with IPsec
When using aes-xcbc-mac for authentication in IPsec,
the kernel crashes. It seems this algorithm doesn't
account for the space IPsec may make in the scatterlist for the authtag.
Thus when crypto_xcbc_digest_update2() gets called,
nbytes may be less than sg[i].length.
Since nbytes is an unsigned number, it wraps
at the end of the loop, allowing us to go back
into the loop and causing a crash in memcpy.

I used the update function in digest.c to model this fix.
Please let me know if it looks ok.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-03-06 19:28:44 +08:00
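
The wrap described above is easy to reproduce in isolation: subtracting a
larger sg entry length from an unsigned byte count does not go negative but
wraps to a huge value, so the surrounding loop keeps running:

#include <stdio.h>

int main(void)
{
	unsigned int nbytes = 100;	/* bytes left to hash */
	unsigned int sg_len = 160;	/* this sg entry is larger (authtag space) */

	nbytes -= sg_len;		/* wraps instead of going negative */
	printf("nbytes is now %u\n", nbytes);	/* ~4.29 billion, loop keeps going */
	return 0;
}
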
Sebastian Siewior 6212f2c7f7 [CRYPTO] xts: Use proper alignment
The XTS blockmode uses a copy of the IV which is saved on the stack
and may or may not be properly aligned. If it is not, it will break
hardware ciphers like the geode or padlock.
This patch encrypts the IV in place so we don't have to worry about
alignment.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Tested-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-03-06 18:56:19 +08:00
Adrian Bunk bc97f19dc8 [CRYPTO] digest: Include internal.h for prototypes
Every file should include the headers containing the externs for its 
global code (in this case for crypto_{init,exit}_digest_ops()).

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-03-05 19:05:54 +08:00
Herbert Xu 3e16bfbaf3 [CRYPTO] authenc: Add missing Kconfig dependency on BLKCIPHER
The authenc algorithm requires BLKCIPHER to be present.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-02-23 11:13:00 +08:00
Herbert Xu 76fc60a2e3 [CRYPTO] skcipher: Move chainiv/seqiv into crypto_blkcipher module
For compatibility with dm-crypt initramfs setups it is useful to merge
chainiv/seqiv into the crypto_blkcipher module.  Since they're required
by most algorithms anyway this is an acceptable trade-off.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-02-23 11:12:06 +08:00
Adrian Bunk c8620c2590 [CRYPTO] null: Add missing Kconfig dependency on BLKCIPHER
This patch fixes the following build error caused by commit 
3631c650c495d61b1dabf32eb26b46873636e918:

<--  snip  -->

...
  LD      .tmp_vmlinux1
crypto/built-in.o: In function `skcipher_null_crypt':
crypto_null.c:(.text+0x3d14): undefined reference to `blkcipher_walk_virt'
crypto_null.c:(.text+0x3d14): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_virt'
crypto/built-in.o: In function `$L32':
crypto_null.c:(.text+0x3d54): undefined reference to `blkcipher_walk_done'
crypto_null.c:(.text+0x3d54): relocation truncated to fit: R_MIPS_26 against `blkcipher_walk_done'
crypto/built-in.o:(.data+0x2e8): undefined reference to `crypto_blkcipher_type'
make[1]: *** [.tmp_vmlinux1] Error 1

<--  snip  -->

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-02-18 09:00:05 +08:00
Frederik Deweerdt 242f1a3437 [CRYPTO] tcrypt: Add missing Kconfig dependency on BLKCIPHER
Building latest git fails with the following error:
	ERROR: "crypto_alloc_ablkcipher" [crypto/tcrypt.ko] undefined!
This appears to happen because CONFIG_CRYPTO_TEST is set while
CONFIG_CRYPTO_BLKCIPHER is not.
The following patch fixes the problem for me.

Signed-off-by: Frederik Deweerdt <frederik.deweerdt@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-02-15 19:19:33 +08:00
David Howells e231c2ee64 Convert ERR_PTR(PTR_ERR(p)) instances to ERR_CAST(p)
Convert instances of ERR_PTR(PTR_ERR(p)) to ERR_CAST(p) using:

perl -spi -e 's/ERR_PTR[(]PTR_ERR[(](.*)[)][)]/ERR_CAST(\1)/' `grep -rl 'ERR_PTR[(]*PTR_ERR' fs crypto net security`

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-07 08:42:26 -08:00
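
A minimal kernel-style illustration of the conversion, using hypothetical
context types:

#include <linux/err.h>

struct old_ctx;
struct new_ctx;

/* Propagate an error pointer across pointer types. */
struct new_ctx *old_way(struct old_ctx *p)
{
	return ERR_PTR(PTR_ERR(p));	/* works, but obscures the intent */
}

struct new_ctx *new_way(struct old_ctx *p)
{
	return ERR_CAST(p);		/* same value, intent is explicit */
}
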
Dan Williams 47437b2c9a async_tx: allow architecture specific async_tx_find_channel implementations
The source and destination addresses are included to allow channel
selection based on address alignment.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
2008-02-06 10:12:18 -07:00
Dan Williams d4c56f97ff async_tx: replace 'int_en' with operation preparation flags
Pass a full set of flags to drivers' per-operation 'prep' routines.
Currently the only flag passed is DMA_PREP_INTERRUPT.  The expectation is
that arch-specific async_tx_find_channel() implementations can exploit this
capability to find the best channel for an operation.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
2008-02-06 10:12:18 -07:00
Dan Williams 0036731c88 async_tx: kill tx_set_src and tx_set_dest methods
The tx_set_src and tx_set_dest methods were originally implemented to allow
an array of addresses to be passed down from async_xor to the dmaengine
driver while minimizing stack overhead.  Removing these methods allows
drivers to have all transaction parameters available at 'prep' time, saves
two function pointers in struct dma_async_tx_descriptor, and reduces the
number of indirect branches..

A consequence of moving this data to the 'prep' routine is that
multi-source routines like async_xor need temporary storage to convert an
array of linear addresses into an array of dma addresses.  In order to keep
the same stack footprint of the previous implementation the input array is
reused as storage for the dma addresses.  This requires that
sizeof(dma_addr_t) be less than or equal to sizeof(void *).  As a
consequence CONFIG_DMADEVICES now depends on !CONFIG_HIGHMEM64G.  It also
requires that drivers be able to make descriptor resources available when
the 'prep' routine is polled.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
2008-02-06 10:12:17 -07:00
Dan Williams d909b34759 async_tx: kill ASYNC_TX_ASSUME_COHERENT
Remove the unused ASYNC_TX_ASSUME_COHERENT flag.  Async_tx is
meant to hide the difference between asynchronous hardware and synchronous
software operations, this flag requires clients to understand cache
coherency consequences of the async path.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
2008-02-06 10:12:17 -07:00
Denis Cheng cf8f68aa76 async_tx: use LIST_HEAD instead of LIST_HEAD_INIT
A single list_head variable initialized with LIST_HEAD_INIT can almost
always be replaced with a LIST_HEAD declaration; this shrinks the code
and looks better.

Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-02-06 10:12:17 -07:00
Dan Williams 1367a3d310 async_tx: fix compile breakage, mark do_async_xor __always_inline
do_async_xor must be compiled away on !HAS_DMA archs.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2008-02-06 10:12:17 -07:00
Linus Torvalds eba0e319c1 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
  [CRYPTO] twofish: Merge common glue code
  [CRYPTO] hifn_795x: Fixup container_of() usage
  [CRYPTO] cast6: inline bloat--
  [CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
  [CRYPTO] tcrypt: Make xcbc available as a standalone test
  [CRYPTO] xcbc: Remove bogus hash/cipher test
  [CRYPTO] xcbc: Fix algorithm leak when block size check fails
  [CRYPTO] tcrypt: Zero axbuf in the right function
  [CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
  [CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
  [CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
  [CRYPTO] tcrypt: Add select of AEAD
  [CRYPTO] salsa20: Add x86-64 assembly version
  [CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
  [CRYPTO] gcm: Introduce rfc4106
  [CRYPTO] api: Show async type
  [CRYPTO] chainiv: Avoid lock spinning where possible
  [CRYPTO] seqiv: Add select AEAD in Kconfig
  [CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
  [CRYPTO] null: Allow setkey on digest_null 
  ...
2008-01-25 08:38:25 -08:00
Ilpo Järvinen e6ccc727f3 [CRYPTO] cast6: inline bloat--
Bloat-o-meter shows rather high readings for cast6...

crypto/cast6.c:
  cast6_setkey  | -1310
  cast6_encrypt | -4567
  cast6_decrypt | -4561
 3 functions changed, 10438 bytes removed, diff: -10438

crypto/cast6.c:
  W    | +659
  Q    | +308
  QBAR | +316
 3 functions changed, 1283 bytes added, diff: +1283

crypto/cast6.o:
 6 functions changed, 1283 bytes added, 10438 bytes removed, diff: -9155

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:17:02 +11:00
Herbert Xu 38ed9ab23b [CRYPTO] tcrypt: Make xcbc available as a standalone test
Currently the xcbc(aes) tests have to be taken together with all other
algorithms.  This patch makes it available by itself at number 106.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:17:01 +11:00
Herbert Xu 94765b9e4c [CRYPTO] xcbc: Remove bogus hash/cipher test
When setting the digest size, xcbc tests whether the underlying algorithm
is a hash.  This is silly because we don't allow it to be a hash and we've
specifically requested a cipher.

This patch removes the bogus test.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:17:00 +11:00
Herbert Xu 1b87887d6c [CRYPTO] xcbc: Fix algorithm leak when block size check fails
When the underlying algorithm has a block size other than 16 we abort
without freeing it.  In fact, we try to return the algorithm itself
as an error!

This patch plugs the leak and makes it return -EINVAL instead.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:17:00 +11:00
Herbert Xu 2a999a3abb [CRYPTO] tcrypt: Zero axbuf in the right function
The axbuf buffer is used by test_aead and therefore should be zeroed
there instead of in test_hash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:59 +11:00
Tan Swee Heng 214dc54f6f [CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:58 +11:00
Sebastian Siewior d1cda4e396 [CRYPTO] tcrypt: Add select of AEAD
ERROR: "crypto_aead_setauthsize" [crypto/tcrypt.ko] undefined!
 ERROR: "crypto_alloc_aead" [crypto/tcrypt.ko] undefined!

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:58 +11:00
Tan Swee Heng 9a7dafbba4 [CRYPTO] salsa20: Add x86-64 assembly version
This is the x86-64 version of the Salsa20 stream cipher algorithm. The
original assembly code came from
<http://cr.yp.to/snuffle/salsa20/amd64-3/salsa20.s>. It has been
reformatted for clarity.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:57 +11:00
Tan Swee Heng 974e4b752e [CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
This patch contains the salsa20-i586 implementation. The original
assembly code came from
<http://cr.yp.to/snuffle/salsa20/x86-pm/salsa20.s>. I have reformatted
it (added indents) so that it matches the other algorithms in
arch/x86/crypto.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:57 +11:00
Herbert Xu dadbc53d0b [CRYPTO] gcm: Introduce rfc4106
This patch introduces the rfc4106 wrapper for GCM just as we have an
rfc4309 wrapper for CCM.  The purpose of the wrapper is to include part
of the IV in the key so that it can be negotiated by IPsec.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:56 +11:00
Herbert Xu 189ed66e95 [CRYPTO] api: Show async type
This patch adds an async field to /proc/crypto for ablkcipher and aead
algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:56 +11:00
Herbert Xu e7cd2514ea [CRYPTO] chainiv: Avoid lock spinning where possible
This patch makes chainiv avoid spinning by postponing requests on lock
contention if the user allows the use of asynchronous algorithms.  If
a synchronous algorithm is requested then we behave as before.

This should improve IPsec performance on SMP when two CPUs attempt to
transmit over the same SA.  Currently one of them will spin doing nothing
waiting for the other CPU to finish its encryption.  This patch makes it
postpone the request and get on with other work.

If only one CPU is transmitting for a given SA, then we will process
the request synchronously as before.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:55 +11:00
Herbert Xu 4726204200 [CRYPTO] seqiv: Add select AEAD in Kconfig
Now that seqiv supports AEAD algorithms it needs to select the AEAD option.

Thanks to Erez Zadok for pointing out the problem.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:55 +11:00
Herbert Xu 6e050778c5 [CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
It's better to return silently than crash and burn when someone feeds us
a zero length.  In particular the null digest algorithm when used as part
of authenc will do that to us.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:54 +11:00
Herbert Xu ce5bd4aca3 [CRYPTO] null: Allow setkey on digest_null
We need to allow setkey on digest_null if it is to be used directly by
authenc instead of through hmac.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:54 +11:00
Herbert Xu 3631c650c4 [CRYPTO] null: Add null blkcipher algorithm
This patch adds a null blkcipher algorithm called ecb(cipher_null) for
backwards compatibility.  Previously the null algorithm when used by
IPsec copied the data byte by byte.  This new algorithm optimises that
to a straight memcpy which lets us better measure inherent overheads in
our IPsec code.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:53 +11:00
Joy Latten 93cc74e078 [CRYPTO] tcrypt: Add CCM vectors
This patch adds 7 test vectors to tcrypt for CCM.
The test vectors are from rfc 3610.
There are about 10 more test vectors in RFC 3610
and 4 or 5 more in NIST. I can add these as time permits.

I also needed to set authsize, since CCM requires it to be set.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:53 +11:00
Joy Latten 4a49b499df [CRYPTO] ccm: Added CCM mode
This patch adds Counter with CBC-MAC (CCM) support.
RFC 3610 and NIST Special Publication 800-38C were referenced.

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:53 +11:00
Herbert Xu d29ce988ae [CRYPTO] aead: Create default givcipher instances
This patch makes crypto_alloc_aead always return algorithms that are
capable of generating their own IVs through givencrypt and givdecrypt.
All existing AEAD algorithms already do.  New ones must either supply
their own or specify a generic IV generator with the geniv field.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:52 +11:00
Herbert Xu 14df4d8043 [CRYPTO] seqiv: Add AEAD support
This patch adds support for using seqiv with AEAD algorithms.  This is
useful for those AEAD algorithms that perform authentication before
encryption because the IV generated by the underlying encryption algorithm
won't be available for authentication.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:52 +11:00
Herbert Xu 5b6d2d7fdf [CRYPTO] aead: Add aead_geniv_alloc/aead_geniv_free
This patch creates the infrastructure to help the construction of IV
generator templates that wrap around AEAD algorithms by adding an IV
generator to them.  This is useful for AEAD algorithms with no built-in
IV generator or to replace their built-in generator.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:51 +11:00
Herbert Xu aedb30dc49 [CRYPTO] aead: Allow algorithms with no givcrypt support
Some algorithms always require manual IV construction.  For instance,
the generic CCM algorithm requires the first byte of the IV to be manually
constructed.  Such algorithms are always used by other algorithms equipped
with their own IV generators and do not need IV generation per se.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:50 +11:00
Herbert Xu e56dd56418 [CRYPTO] authenc: Add givencrypt operation
This patch implements the givencrypt function for authenc.  It simply
calls the givencrypt operation on the underlying cipher instead of encrypt.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:50 +11:00
Herbert Xu 743edf5727 [CRYPTO] aead: Add givcrypt operations
This patch adds the underlying givcrypt operations for aead and associated
support elements.  The rationale is identical to that of the skcipher
givcrypt operations, i.e., sometimes only the algorithm knows how the
IV should be generated.

A new request type aead_givcrypt_request is added which contains an
embedded aead_request structure with two new elements to support this
operation.  The new elements are seq and giv.  The seq field should
contain a strictly increasing 64-bit integer which may be used by
certain IV generators as an input value.  The giv field will be used
to store the generated IV.  It does not need to obey the alignment
requirements of the algorithm because it's not used during the operation.

The existing iv field must still be available as it will be used to store
intermediate IVs and the output IV if chaining is desired.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:49 +11:00
Herbert Xu 0a270321db [CRYPTO] seqiv: Add Sequence Number IV Generator
This generator generates an IV based on a sequence number by xoring it
with a salt.  This algorithm is mainly useful for CTR and similar modes.

This patch also sets it as the default IV generator for ctr.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:48 +11:00
Herbert Xu 1472e5ebaa [CRYPTO] gcm: Use crypto_grab_skcipher
This patch converts the gcm algorithm over to crypto_grab_skcipher
which is a prerequisite for IV generation.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:47 +11:00
Herbert Xu d00aa19b50 [CRYPTO] gcm: Allow block cipher parameter
This patch adds the gcm_base template which takes a block cipher
parameter instead of a cipher.  This allows the user to specify a
specific CTR implementation.

This also fixes a leak of the cipher algorithm that was previously
looked up but never freed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:47 +11:00
Herbert Xu 9ffde35a8e [CRYPTO] authenc: Use crypto_grab_skcipher
This patch converts the authenc algorithm over to crypto_grab_skcipher
which is a prerequisite for IV generation.

This patch also changes authenc to set its ASYNC status depending on
the ASYNC status of the underlying skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:46 +11:00
Herbert Xu b9c55aa475 [CRYPTO] skcipher: Create default givcipher instances
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt.  Each algorithm may specify its default IV
generator through the geniv field.

For algorithms that do not set the geniv field, the blkcipher layer will
pick a default.  Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms.  Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:46 +11:00
Herbert Xu 806d183aa6 [CRYPTO] eseqiv: Add Encrypted Sequence Number IV Generator
This generator generates an IV based on a sequence number by xoring it
with a salt and then encrypting it with the same key as used to encrypt
the plain text.  This algorithm requires that the block size be equal
to the IV size.  It is mainly useful for CBC.

It has one noteworthy property that for IPsec the IV happens to lie
just before the plain text so the IV generation simply increases the
number of encrypted blocks by one.  Therefore the cost of this generator
is entirely dependent on the speed of the underlying cipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:45 +11:00
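
A conceptual sketch of the seeding step (not the kernel implementation; the
placement of the sequence number within the block is an assumption here):

#include <stdint.h>
#include <string.h>

/* Fill an IV-sized block with the salt, XOR in the 64-bit sequence number,
 * and let the cipher encrypt this block immediately in front of the plain
 * text.  The encrypted block is the IV that goes on the wire. */
void eseqiv_seed(uint8_t *block, size_t ivsize,
		 const uint8_t *salt, uint64_t seq)
{
	memcpy(block, salt, ivsize);
	/* XOR the sequence number into the tail of the block, big endian. */
	for (size_t i = 0; i < 8 && i < ivsize; i++)
		block[ivsize - 1 - i] ^= (uint8_t)(seq >> (8 * i));
}
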
Herbert Xu 7f47073911 [CRYPTO] chainiv: Add chain IV generator
The chain IV generator is the one we've been using in the IPsec stack.
It simply starts out with a random IV, then uses the last block of each
encrypted packet's cipher text as the IV for the next packet.

It can only be used by synchronous ciphers since we have to make sure
that we don't start the encryption of the next packet until the last
one has completed.

It does have the advantage of using very little CPU time since it doesn't
have to generate anything at all.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:44 +11:00
Herbert Xu ecfc43292f [CRYPTO] skcipher: Add skcipher_geniv_alloc/skcipher_geniv_free
This patch creates the infrastructure to help the construction of givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by adding
an IV generator to them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:44 +11:00
Herbert Xu 927eead52c [CRYPTO] cryptd: Use geniv of the underlying algorithm
If the underlying algorithm specifies a specific geniv algorithm then
we should use it for the cryptd version as well.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:43 +11:00
Herbert Xu 23508e11ab [CRYPTO] skcipher: Added geniv field
This patch introduces the geniv field which indicates the default IV
generator for each algorithm.  It should point to a string that is not
freed as long as the algorithm is registered.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:43 +11:00
Herbert Xu 61da88e2b8 [CRYPTO] skcipher: Add givcrypt operations and givcipher type
Different block cipher modes have different requirements for initialisation
vectors.  For example, CBC can use a simple randomly generated IV while
modes such as CTR must use an IV generation mechanism that gives a stronger
guarantee on the lack of collisions.  Furthermore, disk encryption modes
have their own IV generation algorithms.

Up until now IV generation has been left to the users of the symmetric
key cipher API.  This is inconvenient as the number of block cipher modes
increases, because the user needs to be aware of which mode is supposed to
be paired with which IV generation algorithm.

Therefore it makes sense to integrate the IV generation into the crypto
API.  This patch takes the first step in that direction by creating two
new ablkcipher operations, givencrypt and givdecrypt, which generate an
IV before performing the actual encryption or decryption.

The operations are currently not exposed to the user.  That will be done
once the underlying functionality has actually been implemented.

It also creates the underlying givcipher type.  Algorithms that directly
generate IVs would use it instead of ablkcipher.  All other algorithms
(including all existing ones) would generate a givcipher algorithm upon
registration.  This givcipher algorithm will be constructed from the geniv
string that's stored in every algorithm.  That string will locate a template
which is instantiated by the blkcipher/ablkcipher algorithm in question to
give a givcipher algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:43 +11:00
Herbert Xu 378f4f51f9 [CRYPTO] skcipher: Add crypto_grab_skcipher interface
Note: From now on the collective of ablkcipher/blkcipher/givcipher will
be known as skcipher, i.e., symmetric key cipher.  The name blkcipher has
always been much of a misnomer since it supports stream ciphers too.

This patch adds the function crypto_grab_skcipher as a new way of getting
an ablkcipher spawn.  The problem is that previously we did this in two
steps, first getting the algorithm and then calling crypto_init_spawn.

This meant that each spawn user had to be aware of what type and mask to
use for these two steps.  This is difficult and also presents a problem
when the type/mask changes as they're about to be for IV generators.

The new interface does both steps together just like crypto_alloc_ablkcipher.

As a side-effect this also allows us to be stronger on type enforcement
for spawns.  For now this is only done for ablkcipher but it's trivial
to extend for other types.

This patch also moves the type/mask logic for skcipher into the helpers
crypto_skcipher_type and crypto_skcipher_mask.

Finally this patch introduces the function crypto_require_sync to determine
whether the user is specifically requesting a sync algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:42 +11:00
Herbert Xu 84c9115230 [CRYPTO] gcm: Add support for async ciphers
This patch adds the necessary changes for GCM to be used with async
ciphers.  This would allow it to be used with hardware devices that
support CTR.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:42 +11:00
Herbert Xu 5311f248b7 [CRYPTO] ctr: Refactor into ctr and rfc3686
As discussed previously, this patch moves the basic CTR functionality
into a chainable algorithm called ctr.  The IPsec-specific variant of
it is now placed on top with the name rfc3686.

So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
variant will be called rfc3686(ctr(aes)).  This patch also adjusts
gcm accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:41 +11:00
Herbert Xu 653ebd9c85 [CRYPTO] blkcipher: Merge ablkcipher and blkcipher into one option/module
With the impending addition of the givcipher type, both blkcipher and
ablkcipher algorithms will use it to create givcipher objects.  As such
it no longer makes sense to split the system between ablkcipher and
blkcipher.  In particular, both ablkcipher.c and blkcipher.c would need
to use the givcipher type which has to reside in ablkcipher.c since it
shares much code with it.

This patch merges the two Kconfig options as well as the modules into one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:41 +11:00
Herbert Xu 2589469d7b [CRYPTO] gcm: Fix request context alignment
This patch fixes the request context alignment so that it is actually
aligned to the value required by the algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:40 +11:00
Herbert Xu 68b6c7d691 [CRYPTO] api: Add crypto_attr_alg_name
This patch adds a new helper crypto_attr_alg_name which is basically the
first half of crypto_attr_alg.  That is, it returns an algorithm name
parameter as a string without looking it up.  The caller can then look it
up immediately or defer it until later.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:40 +11:00
Borislav Petkov 5e553110f2 [CRYPTO] authenc: Select HASH in Kconfig
I get here:

----
  LD      vmlinux
  SYSMAP  System.map
  SYSMAP  .tmp_System.map
  Building modules, stage 2.
  MODPOST 226 modules
ERROR: "crypto_hash_type" [crypto/authenc.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
---

which fails because crypto_hash_type is declared in crypto/hash.c. You might want to
fix it like so:

Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:39 +11:00
Herbert Xu 7c3d703fa8 [CRYPTO] authenc: Merge common hashing code
This patch merges the common hashing code between encryption and decryption.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:38 +11:00
Herbert Xu 12dc5e62b4 [CRYPTO] authenc: Use RTA_OK to check length
This patch changes setkey to use RTA_OK to check the validity of the
setkey request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:38 +11:00
Herbert Xu c2c61f513d [CRYPTO] authenc: Fix typo in ivsize
The ivsize should be fetched from ablkcipher, not blkcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:37 +11:00
Tan Swee Heng 5de8f1b562 [CRYPTO] tcrypt: Added salsa20 speed test
This patch adds a simple speed test for salsa20.
Usage: modprobe tcrypt mode=206

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:36 +11:00
Zoltan Sogor 0b77abb3b2 [CRYPTO] lzo: Add LZO compression algorithm support
Add LZO compression algorithm support

Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:35 +11:00
Zoltan Sogor 91755a921c [CRYPTO] tcrypt: Add common compression tester function
Add common compression tester function
Modify deflate test case to use the common compressor test function

Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Tan Swee Heng 8bff664cdf [CRYPTO] tcrypt: Salsa20 large test vector
This is a large test vector for Salsa20 that crosses the 4096-bytes
page boundary.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Tan Swee Heng eb6f13eb9f [CRYPTO] salsa20_generic: Fix multi-page processing
This patch fixes the multi-page processing bug that affects large test
vectors (the same bug that previously affected ctr.c).

There is an optimization for the case walk.nbytes == nbytes. Also we
now use crypto_xor() instead of ad-hoc XOR routines.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:34 +11:00
Herbert Xu 7f6813786a [CRYPTO] gcm: Put abreq in private context instead of on stack
The abreq structure is currently allocated on the stack.  This is broken
if the underlying algorithm is asynchronous.  This patch changes it so
that it's taken from the private context instead which has been enlarged
accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:33 +11:00
Herbert Xu b2ab4a57b0 [CRYPTO] scatterwalk: Restore custom sg chaining for now
Unfortunately the generic chaining hasn't been ported to all architectures
yet, notably not s390.  So this patch restores the chaining that we've
been using previously, which does work everywhere.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:33 +11:00
Herbert Xu 42c271c6c5 [CRYPTO] scatterwalk: Move scatterwalk.h to linux/crypto
The scatterwalk infrastructure is used by algorithms so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:32 +11:00
Herbert Xu fe70f5dfe1 [CRYPTO] aead: Return EBADMSG for ICV mismatch
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for
ICV mismatches.  This convention has already been adopted by IPsec.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:32 +11:00
Herbert Xu 6160b28992 [CRYPTO] gcm: Fix ICV handling
The crypto_aead convention for ICVs is to include it directly in the
output.  If we decided to change this in future then we would make
the ICV (if the algorithm has an explicit one) available in the
request itself.

For now no algorithm needs this so this patch changes gcm to conform
to this convention.  It also adjusts the tcrypt aead tests to take
this into account.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:31 +11:00
Herbert Xu 8df213d9b5 [CRYPTO] tcrypt: Make gcm available as a standalone test
Currently the gcm(aes) tests have to be taken together with all other
ciphers.  This patch makes it available by itself at number 35.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
Herbert Xu 481f34ae75 [CRYPTO] authenc: Fix hash verification
The previous code incorrectly included the hash in the verification which
also meant that we'd crash and burn when it comes to actually verifying
the hash since we'd go past the end of the SG list.

This patch fixes that by subtracting authsize from cryptlen at the start.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
Herbert Xu e236d4a89a [CRYPTO] authenc: Move enckeylen into key itself
Having enckeylen as a template parameter makes it a pain for hardware
devices that implement ciphers with many key sizes since each one would
have to be registered separately.

Since the authenc algorithm is mainly used for legacy purposes where its
key is going to be constructed out of two separate keys, we can in fact
embed this value into the key itself.

This patch does this by prepending an rtnetlink header to the key that
contains the encryption key length.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:30 +11:00
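
A conceptual sketch of such a combined key blob (the header layout and
attribute type shown are illustrative, not the kernel's exact definitions):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>	/* htonl() for the big-endian length field */

/* Layout:
 *   [ rtattr-style header | be32 enckeylen | authentication key | encryption key ]
 * A driver receives one blob and can split it without a template parameter. */
struct key_param_hdr {
	uint16_t len;		/* header length, rtattr style */
	uint16_t type;		/* attribute type (illustrative value below) */
	uint32_t enckeylen;	/* encryption key length, big endian */
};

size_t build_authenc_key(uint8_t *out,
			 const uint8_t *authkey, size_t authkeylen,
			 const uint8_t *enckey, size_t enckeylen)
{
	struct key_param_hdr hdr = {
		.len = sizeof(hdr),
		.type = 1,
		.enckeylen = htonl((uint32_t)enckeylen),
	};

	memcpy(out, &hdr, sizeof(hdr));
	memcpy(out + sizeof(hdr), authkey, authkeylen);
	memcpy(out + sizeof(hdr) + authkeylen, enckey, enckeylen);
	return sizeof(hdr) + authkeylen + enckeylen;
}
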
Herbert Xu 7ba683a6de [CRYPTO] aead: Make authsize a run-time parameter
As it is, authsize is an algorithm parameter which cannot be changed at
run-time.  This is inconvenient because hardware that implements such
algorithms would have to register each authsize that they support
separately.

Since authsize is a property common to all AEAD algorithms, we can add
a function setauthsize that sets it at run-time, just like setkey.

This patch does exactly that and also changes authenc so that authsize
is no longer a parameter of its template.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:29 +11:00
Herbert Xu e29bc6ad0e [CRYPTO] authenc: Use or instead of max on alignment masks
Since alignment masks are always one less than a power of two, we can
use binary or to find their maximum.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:28 +11:00
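
Because every alignment mask has the form 2^k - 1, OR-ing two masks yields
the mask of the stricter alignment, i.e. their maximum:

#include <stdio.h>

int main(void)
{
	unsigned int a = 3;	/* 4-byte alignment  -> mask 0b0011 */
	unsigned int b = 15;	/* 16-byte alignment -> mask 0b1111 */

	/* For masks of the form 2^k - 1, OR gives the same result as max(). */
	printf("or=%u max=%u\n", a | b, a > b ? a : b);	/* both print 15 */
	return 0;
}
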
Denis Cheng a10e11946b [CRYPTO] tcrypt: Use print_hex_dump from linux/kernel.h
These utilities implemented in lib/hexdump.c are more handy, please use this.

Signed-off-by: Denis Cheng <crquan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:27 +11:00
Jan Glauber 9617d6ef62 [CRYPTO] tcrypt: AES CBC test vectors from NIST SP800-38A
Add test vectors to tcrypt for AES in CBC mode for key sizes 192 and 256.
The test vectors are copied from NIST SP800-38A.

Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:26 +11:00
Tan Swee Heng a773edb3ed [CRYPTO] tcrypt: AES CTR large test vector
This patch adds a large AES CTR mode test vector. The test vector is
4100 bytes in size. It was generated using a C++ program that called
Crypto++.

Note that this patch increases considerably the size of "struct
cipher_testvec" and hence the size of tcrypt.ko.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:25 +11:00
Tan Swee Heng 6d1a69d53a [CRYPTO] tcrypt: Support for large test vectors
Currently the number of entries in a cipher test vector template is
limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch
circumvents the problem by pointing cipher_tv to each entry in the
template, rather than the template itself.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:25 +11:00
Herbert Xu 0971eb0de9 [CRYPTO] ctr: Fix multi-page processing
When the data spans across a page boundary, CTR may incorrectly process
a partial block in the middle because the blkcipher walking code may
supply partial blocks in the middle as long as the total length of the
supplied data is more than a block.  CTR is supposed to return any unused
partial block in that case to the walker.

This patch fixes this by doing exactly that, returning partial blocks to
the walker unless we received less than a block-worth of data to start
with.

This also allows us to optimise the bulk of the processing since we no
longer have to worry about partial blocks until the very end.

Thanks to Tan Swee Heng for fixes and actually testing this :)

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:24 +11:00
Mikko Herranen 28db8e3e38 [CRYPTO] gcm: New algorithm
Add GCM/GMAC support to cryptoapi.

GCM (Galois/Counter Mode) is an AEAD mode of operations for any block cipher
with a block size of 16.  The typical example is AES-GCM.

Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:23 +11:00
Mikko Herranen e3a4ea4fd2 [CRYPTO] tcrypt: Add aead support
Add AEAD support to tcrypt, needed by GCM.

Signed-off-by: Mikko Herranen <mh1@iki.fi>
Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:23 +11:00
Denys Vlasenko ff85a8082f [CRYPTO] camellia: Move more common code into camellia_setup_tail
Analogously to the camellia7 patch, move the
"absorb kw2 to other subkeys" and "absorb kw4 to other subkeys"
code parts into camellia_setup_tail(). This further reduces
source and object code size at the cost of two branches
in the key setup code.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:22 +11:00
Denys Vlasenko dedcf8b064 [CRYPTO] camellia: Move common code into camellia_setup_tail
Move the "key XOR is end of F-function" code part into
camellia_setup_tail(); it is sufficiently similar
between camellia_setup128 and camellia_setup256.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:21 +11:00
Denys Vlasenko acca79a664 [CRYPTO] camellia: Merge encrypt/decrypt routines for all key lengths
This unifies the encrypt/decrypt routines for different key lengths,
which reduces module size by ~25%, with a tiny (less than 1%)
speed impact.
It also collapses encrypt/decrypt into a more readable
(visually shorter) form using macros.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:21 +11:00
Denys Vlasenko 2ddae4a644 [CRYPTO] camellia: Code shrink
Remove unused macro params.
Use (u8)(expr) instead of (expr) & 0xff, which
helps gcc realize that it can use simpler instructions.
Move the CAMELLIA_FLS macro closer to the encrypt/decrypt routines.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:20 +11:00
Herbert Xu 3f8214ea33 [CRYPTO] ctr: Use crypto_inc and crypto_xor
This patch replaces the custom inc/xor in CTR with the generic functions.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:20 +11:00
Herbert Xu d0b9007a27 [CRYPTO] pcbc: Use crypto_xor
This patch replaces the custom xor in CBC with the generic crypto_xor.

It changes the operations for in-place encryption slightly to avoid
calling crypto_xor with tmpbuf since it is not necessarily aligned.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:19 +11:00
Herbert Xu 50b6544e13 [CRYPTO] cbc: Require block size to be a power of 2
All common block ciphers have a block size that's a power of 2.  In fact,
all of our block ciphers obey this rule.

If we require this then CBC can be optimised to avoid an expensive divide
on in-place decryption.

I've also changed the saving of the first IV in the in-place decryption
case to the last IV because that lets us use walk->iv (which is already
aligned) for the xor operation where alignment is required.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:19 +11:00
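
With a power-of-two block size, the remainder needed on the in-place
decryption path reduces to a bit mask, for example:

#include <stdio.h>

int main(void)
{
	unsigned int len = 1000, bsize = 16;	/* bsize is a power of 2 */

	/* Offset of the partial tail: division/modulo ... */
	unsigned int rem  = len % bsize;
	/* ... becomes a cheap mask when bsize is a power of 2. */
	unsigned int rem2 = len & (bsize - 1);

	printf("%u %u\n", rem, rem2);		/* both print 8 */
	return 0;
}
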
Herbert Xu 3c7f076da5 [CRYPTO] cbc: Use crypto_xor
This patch replaces the custom xor in CBC with the generic crypto_xor.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:18 +11:00
Herbert Xu 7613636def [CRYPTO] api: Add crypto_inc and crypto_xor
With the addition of more stream ciphers we need to curb the proliferation
of ad-hoc xor functions.  This patch creates a generic pair of functions,
crypto_inc and crypto_xor, which do big-endian increment and exclusive-or,
respectively.

For optimum performance, they both use u32 operations, so the alignment must
be that of u32 even though the arguments are of type u8 *.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:17 +11:00
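
A standalone sketch of the two helpers' semantics (byte-at-a-time for
clarity; as the commit notes, the kernel versions operate on u32 words for
speed):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Big-endian increment: treat the buffer as one big-endian number, add 1. */
void demo_inc(uint8_t *a, size_t size)
{
	size_t i = size;

	while (i-- > 0)
		if (++a[i] != 0)	/* stop once a byte does not wrap to zero */
			break;
}

/* Exclusive-or src into dst. */
void demo_xor(uint8_t *dst, const uint8_t *src, size_t size)
{
	for (size_t i = 0; i < size; i++)
		dst[i] ^= src[i];
}

int main(void)
{
	uint8_t ctr[4]  = { 0x00, 0x00, 0x00, 0xff };
	uint8_t mask[4] = { 0xff, 0x00, 0xff, 0x00 };

	demo_inc(ctr, sizeof(ctr));		/* 000000ff -> 00000100 */
	demo_xor(ctr, mask, sizeof(ctr));	/* -> ff00fe00 */
	printf("%02x%02x%02x%02x\n", ctr[0], ctr[1], ctr[2], ctr[3]);
	return 0;
}
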
Tan Swee Heng 2407d60872 [CRYPTO] salsa20: Salsa20 stream cipher
This patch implements the Salsa20 stream cipher using the blkcipher interface.

The core cipher code comes from Daniel Bernstein's submission to eSTREAM:
  http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/ref/

The test vectors comes from:
  http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/

It has been tested successfully with "modprobe tcrypt mode=34" on an
UML instance.

Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
Herbert Xu 332f8840f7 [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type
Up until now, ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set.  This is suboptimal because
ablkcipher refers to two things.  On the one hand it refers to the
top-level ablkcipher interface with requests.  On the other hand it
refers to an algorithm type underneath.

As it is you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top.  This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.

This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST.  The type is associated
with the algorithm implementation only.

Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used.  If it's a specific mask
then the old blkcipher interface is given, otherwise we go with the
new ablkcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:15 +11:00
Herbert Xu 468577abe3 [CRYPTO] scatterwalk: Use generic scatterlist chaining
This patch converts the crypto scatterwalk code to use the generic
scatterlist chaining rather the version specific to crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:14 +11:00
Jonathan Lynch cd12fb906d [CRYPTO] sha256-generic: Extend sha256_generic.c to support SHA-224
Resubmitting this patch which extends sha256_generic.c to support SHA-224 as
described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC4231
is then supported through the hmac interface.

Patch includes test vectors for SHA-224 and HMAC-SHA-224.

SHA-224 should be chosen as a hash algorithm when 112 bits of security
strength is required.

Patch generated against the 2.6.24-rc1 kernel and tested against
2.6.24-rc1-git14, which includes the fix for the scatter-gather implementation of HMAC.

Signed-off-by: Jonathan Lynch <jonathan.lynch@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:12 +11:00
Sebastian Siewior 5157dea813 [CRYPTO] aes-i586: Remove setkey
The setkey() function can be shared with the generic algorithm.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior b345cee90a [CRYPTO] ctr: Remove default M
No other block mode is 'm' by default.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior 81190b3215 [CRYPTO] aes-x86-64: Remove setkey
The setkey() function can be shared with the generic algorithm.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:10 +11:00
Sebastian Siewior 96e82e4551 [CRYPTO] aes-generic: Make key generation exportable
This patch exports four tables and the set_key() routine. These resources
can be shared by other AES implementations (aes-x86_64 for instance).
The decryption key has been turned around (deckey[0] is the first piece
of the key instead of deckey[keylen+20]). The encrypt/decrypt functions
now look identical (except that they use different tables and keys).

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:09 +11:00
Sebastian Siewior be5fb27012 [CRYPTO] aes-generic: Coding style cleanup
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:09 +11:00
Joy Latten 41fdab3dd3 [CRYPTO] ctr: Add countersize
This patch adds countersize to CTR mode.
The template is now ctr(algo,noncesize,ivsize,countersize).

For example, ctr(aes,4,8,4) indicates the counterblock
will be composed of a salt/nonce that is 4 bytes, an iv
that is 8 bytes and the counter is 4 bytes.

When noncesize + ivsize < blocksize, CTR initializes the
last (blocksize - ivsize - noncesize) bytes of the counter
block to zero.  Otherwise the counter block is composed of
the IV (and nonce if necessary).

If noncesize + ivsize == blocksize, then this indicates that the
user is passing in the entire counterblock. Thus countersize
indicates the number of bytes in the counterblock to use as
the counter for incrementing. CTR will increment the counter
portion by 1, and begin encryption with that value.

Note that CTR assumes the counter portion of the block that
will be incremented is stored in big endian.
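
As an illustration of the ctr(aes,4,8,4) example above, the 16-byte AES
counter block is laid out roughly as follows (the struct name is
illustrative):

struct ctr_aes_4_8_4_block {
	u8	nonce[4];	/* salt/nonce taken from the key material */
	u8	iv[8];		/* per-request IV */
	__be32	counter;	/* big-endian counter, incremented per block */
};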

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:08 +11:00
Denys Vlasenko d3e7480572 [CRYPTO] camellia: De-unrolling
Move the huge unrolled pieces of code (3 screenfuls) at the end of
the 128/256 key setup routines into a common camellia_setup_tail(),
and convert it to a loop there.
The loop is still unrolled six times, so the performance hit is very
small while the code size win is big.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:08 +11:00
Denys Vlasenko 1ce73e8d6d [CRYPTO] camellia: Code cleanup
Optimize GETU32 to use a 4-byte memcpy (modern gcc will convert
such a memcpy to a single move instruction on i386).
The original GETU32 did four byte fetches and shifted/XORed those,
roughly as sketched below.
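
Illustrative macros only, not the exact camellia code (be32_to_cpu() is the
kernel's byte-order helper):

/* Before: four byte loads, shifted and XORed together. */
#define GETU32_OLD(pt) \
	(((u32)(pt)[0] << 24) ^ ((u32)(pt)[1] << 16) ^ \
	 ((u32)(pt)[2] <<  8) ^  (u32)(pt)[3])

/* After: one 4-byte memcpy the compiler can turn into a single load. */
#define GETU32_NEW(v, pt) \
	do { memcpy(&(v), (pt), 4); (v) = be32_to_cpu(v); } while (0)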

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:07 +11:00
Denys Vlasenko 3a5e5f8108 [CRYPTO] camellia: Code cleanup
Rename some macros to shorter names: CAMELLIA_RR8 -> ROR8,
making it easier to understand that it is just a right rotation,
nothing camellia-specific in it.
CAMELLIA_SUBKEY_L() -> SUBKEY_L() - just shorter.

Move be32 <-> cpu conversions out of en/decrypt128/256 and into
camellia_en/decrypt - no reason to have that code duplicated.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:07 +11:00
Denys Vlasenko 1721a81256 [CRYPTO] camellia: Code cleanup
Move code blocks around so that related pieces are closer together:
e.g. CAMELLIA_ROUNDSM macro does not need to be separated
from the rest of the code by huge array of constants.

Remove unused macros (COPY4WORD, SWAP4WORD, XOR4WORD[2])

Drop SUBL(), SUBR() macros which only obscure things.
Same for CAMELLIA_SP1110() macro and KEY_TABLE_TYPE typedef.

Remove useless comments:
/* encryption */ -- well it's obvious enough already!
void camellia_encrypt128(...)

Combine swap with copying at the beginning/end of encrypt/decrypt.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Acked-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:06 +11:00
Denys Vlasenko e2b21b5002 [CRYPTO] twofish: Do not unroll big stuff in twofish key setup
Currently twofish cipher key setup code
has unrolled loops - approximately 70-100
instructions are repeated 40 times.

As a result, twofish module is the biggest module
in crypto/*.

Unrolling produces x2.5 more code (+18k on i386), and speeds up key
setup by 7%:

	unrolled: twofish_setkey/sec: 41128
	    loop: twofish_setkey/sec: 38148
	CALC_K256: ~100 insns each
	CALC_K192: ~90 insns
	   CALC_K: ~70 insns

Attached patch removes this unrolling.

$ size */twofish_common.o
   text    data     bss     dec     hex filename
  37920       0       0   37920    9420 crypto.org/twofish_common.o
  13209       0       0   13209    3399 crypto/twofish_common.o

Run tested (modprobe tcrypt reports ok). Please apply.

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:06 +11:00
Sebastian Siewior 89e1265431 [CRYPTO] aes: Move common defines into a header file
These three defines are used in all AES-related hardware.
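
A hedged guess at the kind of constants involved (values follow the AES
standard; the exact macro names in the header may differ):

#define AES_MIN_KEY_SIZE	16	/* AES-128 */
#define AES_MAX_KEY_SIZE	32	/* AES-256 */
#define AES_BLOCK_SIZE		16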

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:04 +11:00
Evgeniy Polyakov c3041f9c93 [CRYPTO] hifn_795x: Detect weak keys
HIFN driver update to use DES weak key checks (exported in this patch).

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:03 +11:00
Evgeniy Polyakov 16d004a2ed [CRYPTO] des: Create header file for common macros
This patch creates include/crypto/des.h for common macros shared between
DES implementations.

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:02 +11:00
Joy Latten 23e353c8a6 [CRYPTO] ctr: Add CTR (Counter) block cipher mode
This patch implements CTR mode for IPsec.
It is based on RFC 3686.

Please note:
1. CTR turns a block cipher into a stream cipher.
Encryption is done in blocks, however the last block
may be a partial block.

A "counter block" is encrypted, creating a keystream
that is xor'ed with the plaintext. The counter portion
of the counter block is incremented after each block
of plaintext is encrypted.
Decryption is performed in the same manner (see the sketch after this list).

2. The CTR counterblock is composed of,
        nonce + IV + counter

The size of the counterblock is equivalent to the
blocksize of the cipher.
        sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize

The CTR template requires the name of the cipher
algorithm, the size of the nonce, and the size of the IV.
        ctr(cipher,sizeof_nonce,sizeof_iv)

So for example,
        ctr(aes,4,8)
specifies the counterblock will be composed of 4 bytes
from a nonce, 8 bytes from the iv, and 4 bytes for counter
since aes has a blocksize of 16 bytes.

3. The counter portion of the counter block is stored
in big endian for conformance to rfc 3686.
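
A minimal per-block sketch of the operation described in (1) and (2),
assuming the crypto_cipher helpers and the crypto_xor/crypto_inc functions
mentioned earlier in this log, with a 4-byte counter at the end of the
block; buffer alignment and partial final blocks are ignored here:

static void ctr_crypt_one_block(struct crypto_cipher *tfm, u8 *ctrblk,
				const u8 *src, u8 *dst, unsigned int bsize)
{
	u8 keystream[16];	/* assumes bsize <= 16 for the sketch */

	/* keystream = E_k(counter block) */
	crypto_cipher_encrypt_one(tfm, keystream, ctrblk);

	/* ciphertext = plaintext xor keystream (decryption is identical) */
	memcpy(dst, src, bsize);
	crypto_xor(dst, keystream, bsize);

	/* bump the big-endian counter portion */
	crypto_inc(ctrblk + bsize - 4, 4);
}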

Signed-off-by: Joy Latten <latten@austin.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11 08:16:01 +11:00
Al Viro 3c50b3683a fcrypt endianness misannotations
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-12-05 09:25:20 -08:00
Herbert Xu 38cb2419f5 [CRYPTO] api: Fix potential race in crypto_remove_spawn
As it is crypto_remove_spawn may try to unregister an instance which is
yet to be registered.  This patch fixes this by checking whether the
instance has been registered before attempting to remove it.

It also removes a bogus cra_destroy check in crypto_register_instance as
1) it's outside the mutex;
2) we have a check in __crypto_register_alg already.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-11-23 19:32:09 +08:00
Herbert Xu f347c4facf [CRYPTO] authenc: Move initialisations up to shut up gcc
It seems that newer versions of gcc have regressed in their abilities to
analyse initialisations.  This patch moves the initialisations up to avoid
the warnings.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-11-23 19:32:09 +08:00
Adrian Bunk 87ae9afdca cleanup asm/scatterlist.h includes
Non-architecture-specific code should not #include <asm/scatterlist.h>.

This patch therefore either replaces them with
#include <linux/scatterlist.h> or simply removes them if they were
unused.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-11-02 08:47:06 +01:00
Herbert Xu a5a613a429 [CRYPTO] tcrypt: Move sg_init_table out of timing loops
This patch moves the sg_init_table out of the timing loops for hash
algorithms so that it doesn't impact on the speed test results.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-27 00:51:21 -07:00
David S. Miller b733588559 [CRYPTO]: Initialize TCRYPT on-stack scatterlist objects correctly.
Use sg_init_one() and sg_init_table() as needed.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-26 00:38:10 -07:00
David S. Miller a6767721a5 [CRYPTO]: HMAC needs some more scatterlist fixups.
hmac_setkey(), hmac_init(), and hmac_final() have
a single on-stack scatterlist.  Initialize it
using sg_init_one() instead of sg_set_buf().
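
For reference, the one-liner pattern ('buf' and 'len' stand for whatever the
caller hashes):

struct scatterlist sg;

sg_init_one(&sg, buf, len);	/* initializes the entry and marks it as the last one */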

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-26 00:37:12 -07:00
Vlad Yasevich 41fb285430 [CRYPTO]: Fix hmac_digest from the SG breakage.
Crypto now uses SG helper functions.  Fix hmac_digest to use those
functions correctly and fix the oops associated with it.

Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-10-25 18:46:26 -07:00
Jens Axboe 642f149031 SG: Change sg_set_page() to take length and offset argument
Most drivers need to set length and offset as well, so may as well fold
those three lines into one.

Add sg_assign_page() for those two locations that only needed to set
the page, where the offset/length is set outside of the function context.
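
Schematically (the open-coded "before" shape is an assumption; 'sg', 'page',
'len' and 'offset' are whatever the driver already has):

/* Before: set the page, then fill in length and offset by hand. */
sg_set_page(&sg[i], page);
sg[i].length = len;
sg[i].offset = offset;

/* After: one call covers all three. */
sg_set_page(&sg[i], page, len, offset);

/* New helper for the two places that only need the page. */
sg_assign_page(&sg[i], page);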

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-24 11:20:47 +02:00
Jens Axboe 78c2f0b8c2 [SG] Update crypto/ to sg helpers
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-22 19:40:16 +02:00
John Anthony Kazos Jr 991d17403c crypto: convert "crypto" subdirectory to UTF-8
Convert the subdirectory "crypto" to UTF-8. The files changed are
<crypto/fcrypt.c> and <crypto/api.c>.

Signed-off-by: John Anthony Kazos Jr. <jakj@j-a-k-j.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
2007-10-19 23:06:17 +02:00
Jens Axboe ab83407e9e crypto: don't pollute the global namespace with sg_next()
It's a subsystem function, prefix it as such.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-10-16 11:07:09 +02:00
Jan Glauber 5265eeb2b0 [CRYPTO] sha: Add header file for SHA definitions
There are currently several SHA implementations that all define their own
initialization vectors and size values. Since these values are identical,
move them to a header file under include/crypto.
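
For example, the SHA-1 values every implementation currently re-defines
(initial state per FIPS 180-2; the macro names here are illustrative):

#define SHA1_DIGEST_SIZE	20
#define SHA1_BLOCK_SIZE		64

#define SHA1_H0	0x67452301UL
#define SHA1_H1	0xefcdab89UL
#define SHA1_H2	0x98badcfeUL
#define SHA1_H3	0x10325476UL
#define SHA1_H4	0xc3d2e1f0UL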

Signed-off-by: Jan Glauber <jang@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:50 -07:00
Sebastian Siewior ad5d27899f [CRYPTO] sha: Load the SHA[1|256] module by an alias
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.
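
Concretely, this relies on each implementation advertising the generic
algorithm name, roughly like so (illustrative):

/* In the generic module and in any driver providing "sha1": */
MODULE_ALIAS("sha1");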

Also remove the probe for sha1 in padlock's init code.

Quote from Herbert:
  The probe is actually pointless since we can always probe when
  the algorithm is actually used which does not lead to dead-locks
  like this.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:50 -07:00
Sebastian Siewior f8246af005 [CRYPTO] aes: Rename aes to aes-generic
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Additionally it ensures that the generic implementation as well as the
HW driver (if available) is loaded in case the HW driver needs the
generic version as fallback in corner cases.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:49 -07:00
Sebastian Siewior c5a511f1cd [CRYPTO] des: Rename des to des-generic
Loading the crypto algorithm by the alias instead of by module directly
has the advantage that all possible implementations of this algorithm
are loaded automatically and the crypto API can choose the best one
depending on its priority.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:49 -07:00
Herbert Xu 7607bd8ff0 [CRYPTO] blkcipher: Added blkcipher_walk_virt_block
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher.  This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.
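
A hedged usage sketch (error handling trimmed; 'desc', 'dst', 'src',
'nbytes' and 'cipher' come from the surrounding mode implementation):

struct blkcipher_walk walk;
unsigned int bsize = crypto_cipher_blocksize(cipher);
int err;

blkcipher_walk_init(&walk, dst, src, nbytes);
err = blkcipher_walk_virt_block(desc, &walk, bsize);

while (walk.nbytes) {
	/* process walk.nbytes bytes from walk.src.virt.addr to
	 * walk.dst.virt.addr, in multiples of bsize except at the tail */
	err = blkcipher_walk_done(desc, &walk, 0);	/* 0 bytes left over */
}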

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
Herbert Xu 2614de1b9a [CRYPTO] blkcipher: Increase kmalloc amount to aligned block size
Now that the block size is no longer a multiple of the alignment, we need to
increase the kmalloc amount in blkcipher_next_slow to use the aligned block
size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:48 -07:00
Herbert Xu d8058480b3 [CRYPTO] api: Explain the comparison on larval cra_name
This patch adds a comment to explain why we compare the cra_driver_name of
the algorithm being registered against the cra_name of a larval as opposed
to the cra_driver_name of the larval.

In fact larvals have only one name, cra_name which is the name that was
requested by the user.  The test here is simply trying to find out whether
the algorithm being registered can or can not satisfy the larval.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:47 -07:00
Herbert Xu 70613783fc [CRYPTO] blkcipher: Remove alignment restriction on block size
Previously we assumed for convenience that the block size is a multiple of
the algorithm's required alignment.  With the pending addition of CTR this
will no longer be the case as the block size will be 1 due to it being a
stream cipher.  However, the alignment requirement will be that of the
underlying implementation which will most likely be greater than 1.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:46 -07:00
Herbert Xu e4c5c6c9b0 [CRYPTO] authenc: Kill spaces in algorithm names
We do not allow spaces in algorithm names or parameters.  Thanks to Joy Latten
for pointing this out.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:46 -07:00
Herbert Xu 720a650f8a [CRYPTO] cryptomgr: Fix parsing of recursive algorithms
As Joy Latten points out, inner algorithm parameters will miss the closing
bracket, which will also cause the outer algorithm to terminate prematurely.

This patch fixes that and also kills the WARN_ON when the number of
parameters exceeds the maximum, as that is a user error.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:45 -07:00
Rik Snel f19f5111c9 [CRYPTO] xts: XTS blockcipher mode implementation without partial blocks
XTS is currently considered by the IEEE 1619 workgroup to be the successor
of the LRW mode. LRW was discarded because it is not secure if the
encryption key itself is encrypted with LRW.

XTS does not have this problem. The implementation is pretty straightforward:
a new function was added to gf128mul to handle GF(128) elements in ble format.
Four test vectors from the specification
	http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
were added, and they verify on my system.

Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:45 -07:00
Ingo Oeser 5aaff0c8f7 [CRYPTO] blkcipher: Use max() in blkcipher_get_spot() to state the intention
Use max in blkcipher_get_spot() instead of open coding it.

Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:44 -07:00
Herbert Xu 70dec235d8 [CRYPTO] api: Kill crypto_km_types
When scatterwalk is built as a module digest.c was broken because it
requires the crypto_km_types structure which is in scatterwalk.  This
patch removes the crypto_km_types structure by encoding the logic into
crypto_kmap_type directly.

In fact, this even saves a few bytes of code (not to mention the data
structure itself) on i386 which is about the only place where it's
needed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:44 -07:00
Herbert Xu 3c09f17c3d [CRYPTO] aead: Add authenc
This patch adds the authenc algorithm which constructs an AEAD algorithm
from an asynchronous block cipher and a hash.  The construction is done
by concatenating the encrypted result from the cipher with the output
from the hash, as is used by the IPsec ESP protocol.

The authenc algorithm exists as a template with four parameters:

	authenc(auth, authsize, enc, enckeylen).

The authentication algorithm, the authentication size (i.e., truncating
the output of the authentication algorithm), the encryption algorithm,
and the encryption key length.  Both the size field and the key length
field are in bytes.  For example, AES-128 with SHA1-HMAC would be
represented by

	authenc(hmac(sha1), 12, cbc(aes), 16)

The key for the authenc algorithm is the concatenation of the keys for
the authentication algorithm with the encryption algorithm.  For the
above example, if a key of length 36 bytes is given, then hmac(sha1)
would receive the first 20 bytes while the last 16 would be given to
cbc(aes).
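
Schematically, the setkey path splits the supplied key as described (a
sketch only; the real code also has to validate the lengths, and 'auth',
'enc', 'key', 'keylen' and 'enckeylen' are assumed names):

/* keylen = 36, enckeylen = 16 in the example above */
err = crypto_hash_setkey(auth, key, keylen - enckeylen);	/* first 20 bytes */
if (!err)
	err = crypto_ablkcipher_setkey(enc, key + keylen - enckeylen,
				       enckeylen);		/* last 16 bytes */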

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:43 -07:00
Herbert Xu 5fa0fea274 [CRYPTO] scatterwalk: Add scatterwalk_map_and_copy
This patch adds the function scatterwalk_map_and_copy which reads or
writes a chunk of data from a scatterlist at a given offset.  It will
be used by authenc which would read/write the authentication data at
the end of the cipher/plain text.
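
A hedged usage sketch (the final flag selects the direction; 'icv', 'sg',
'cryptlen' and 'authsize' are assumed names):

/* Read the authentication data out of the scatterlist... */
scatterwalk_map_and_copy(icv, sg, cryptlen, authsize, 0);

/* ...or write it back at the same offset. */
scatterwalk_map_and_copy(icv, sg, cryptlen, authsize, 1);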

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:42 -07:00
Herbert Xu e962a653f3 [CRYPTO] api: Move scatterwalk into algapi
The scatterwalk code is only used by algorithms that can be built as
a module.  Therefore we can move it into algapi.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:41 -07:00
Herbert Xu 2de98e7544 [CRYPTO] ablkcipher: Remove queue pointer from common alg object
Since not everyone needs a queue pointer, and those who need it can
always get it from the context anyway, the queue pointer in the
common alg object is redundant.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:41 -07:00
Herbert Xu 791b4d5f73 [CRYPTO] api: Add missing headers for setkey_unaligned
This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function.  It also breaks a couple of
long lines.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:40 -07:00
Herbert Xu 39e1ee011f [CRYPTO] api: Add support for multiple template parameters
This patch adds support for having multiple parameters to
a template, separated by a comma.  It also adds support
for integer parameters in addition to the current algorithm
parameter type.

This will be used by the authenc template which will have
four parameters: the authentication algorithm, the encryption
algorithm, the authentication size and the encryption key
length.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:40 -07:00
Herbert Xu 1ae978208e [CRYPTO] api: Add aead crypto type
This patch adds crypto_aead which is the interface for AEAD
(Authenticated Encryption with Associated Data) algorithms.

AEAD algorithms perform authentication and encryption in one
step.  Traditionally users (such as IPsec) would use two
different crypto algorithms to perform these.  With AEAD
this comes down to one algorithm and one operation.

Of course if traditional algorithms were used we'd still
be doing two operations underneath.  However, real AEAD
algorithms may allow the underlying operations to be
optimised as well.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:39 -07:00
Hye-Shik Chang e2ee95b8c6 [CRYPTO] seed: New cipher algorithm
This patch adds support for the SEED cipher (RFC4269).

This patch has been used by a few VPN appliance vendors in Korea for
several years, and it was verified by KISA, who developed the
algorithm itself.

Given its importance in the Korean banking industry, it would be great
if Linux incorporated support for it.

Signed-off-by: Hye-Shik Chang <perky@FreeBSD.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:38 -07:00
Adrian Bunk a349365e5e [CRYPTO] Kconfig: Remove "default m"s
Other options requiring specific block cipher algorithms already have
the appropriate selects.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-10-10 16:55:36 -07:00
Dan Williams 6247cdc2cd async_tx: fix dma_wait_for_async_tx
Fix dma_wait_for_async_tx to not loop forever in the case where a
dependency chain is longer than two entries.  This condition will not
happen with current in-kernel drivers, but fix it for future drivers.

Found-by: Saeed Bishara <saeed.bishara@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-09-24 10:26:26 -07:00
Herbert Xu 32528d0fbd [CRYPTO] blkcipher: Fix inverted test in blkcipher_get_spot
The previous patch had the conditional inverted.  This patch fixes it
so that we return the original position if it does not straddle a page.

Thanks to Bob Gilligan for spotting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-10 15:51:11 +08:00
Herbert Xu e4630f9fd8 [CRYPTO] blkcipher: Fix handling of kmalloc page straddling
The function blkcipher_get_spot tries to return a buffer of
the specified length that does not straddle a page.  It has
an off-by-one bug so it may advance a page unnecessarily.

What's worse, one of its callers doesn't provide a buffer
that's sufficiently long for this operation.

This patch fixes both problems.  Thanks to Bob Gilligan for
diagnosing this problem and providing a fix.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-09-09 08:45:21 +01:00
Sebastian Siewior 0681717678 [CRYPTO] api: fix writing into unallocated memory in setkey_aligned
setkey_unaligned() committed in ca7c39385c
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-08-06 15:33:56 +08:00
Dan Williams eb0645a8b1 async_tx: fix kmap_atomic usage in async_memcpy
Andrew Morton:
	[async_memcpy] is very wrong if both ASYNC_TX_KMAP_DST and
	ASYNC_TX_KMAP_SRC can ever be set.  We'll end up using the same kmap
	slot for both src and dest and we get either corrupted data or a BUG.

Evgeniy Polyakov:
	Btw, shouldn't it always be kmap_atomic() even if flag is not set.
	That pages are usual one returned by alloc_page().

So fix the usage of kmap_atomic and kill the ASYNC_TX_KMAP_DST and
ASYNC_TX_KMAP_SRC flags.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-20 08:44:19 -07:00
Pavel Emelianov 13d31894b3 Make crypto API use seq_list_xxx helpers
Simple and stupid - just use the same code from another place in the kernel.

Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-16 09:05:42 -07:00
David S. Miller d09f51b699 Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6
Conflicts:

	crypto/Kconfig
2007-07-14 23:47:04 -07:00
Dan Williams 9bc89cd82d async_tx: add the async_tx api
The async_tx api provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the api can optimize for asynchronous operation and the
api will fit the chain of operations to the available offload resources. 
 
	I imagine that any piece of ADMA hardware would register with the
	'async_*' subsystem, and a call to async_X would be routed as
	appropriate, or be run in-line. - Neil Brown

async_tx exploits the capabilities of struct dma_async_tx_descriptor to
provide an api of the following general format:

struct dma_async_tx_descriptor *
async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			dma_async_tx_callback cb_fn, void *cb_param)
{
	struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
	struct dma_device *device = chan ? chan->device : NULL;
	int int_en = cb_fn ? 1 : 0;
	struct dma_async_tx_descriptor *tx = device ?
		device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

	if (tx) { /* run <operation> asynchronously */
		...
		tx->tx_set_dest(addr, tx, index);
		...
		tx->tx_set_src(addr, tx, index);
		...
		async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
	} else { /* run <operation> synchronously */
		...
		<operation>
		...
		async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
	}

	return tx;
}

async_tx_find_channel() returns a capable channel from its pool.  The
channel pool is organized as a per-cpu array of channel pointers.  The
async_tx_rebalance() routine is tasked with managing these arrays.  In the
uniprocessor case async_tx_rebalance() tries to spread responsibility
evenly over channels of similar capabilities.  For example if there are two
copy+xor channels, one will handle copy operations and the other will
handle xor.  In the SMP case async_tx_rebalance() attempts to spread the
operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
channel0 while cpu1 gets copy channel 1 and xor channel 1.  When a
dependency is specified async_tx_find_channel defaults to keeping the
operation on the same channel.  A xor->copy->xor chain will stay on one
channel if it supports both operation types, otherwise the transaction will
transition between a copy and a xor resource.

Currently the raid5 implementation in the MD raid456 driver has been
converted to the async_tx api.  A driver for the offload engines on the
Intel Xscale series of I/O processors, iop-adma, is provided in a later
commit.  With the iop-adma driver and async_tx, raid456 is able to offload
copy, xor, and xor-zero-sum operations to hardware engines.
 
On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
improvement) and sequential reads to a degraded array (40 - 55%
improvement).  For the other cases performance was roughly equal, +/- a few
percentage points.  On a x86-smp platform the performance of the async_tx
implementation (in synchronous mode) was also +/- a few percentage points
of the original implementation.  According to 'top' on iop342 CPU
utilization drops from ~50% to ~15% during a 'resync' while the speed
according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
 
The tiobench command line used for testing was: tiobench --size 2048
--block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available

Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
  async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will
  fire at operation completion time and the callback will occur in a
  tasklet.  if the channel does not support interrupts then a live
  polling wait will be performed
* the api is written as a dmaengine client that requests all available
  channels
* In support of dependencies the api implicitly schedules channel-switch
  interrupts.  The interrupt triggers the cleanup tasklet which causes
  pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software
  xor routine.  To the software routine the destination address is an implied
  source, whereas engines treat it as a write-only destination.  This patch
  modifies the xor_blocks routine to take an explicit destination address
  to mirror the hardware.

Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and
  interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities
  evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to
  the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and
  the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
2007-07-13 08:06:14 -07:00
Dan Williams 685784aaf3 xor: make 'xor_blocks' a library routine for use with async_tx
The async_tx api tries to use a dma engine for an operation, but will fall
back to an optimized software routine otherwise.  Xor support is
implemented using the raid5 xor routines.  For organizational purposes this
routine is moved to a common area.

The following fixes are also made:
* rename xor_block => xor_blocks, suggested by Adrian Bunk
* ensure that xor.o initializes before md.o in the built-in case
* checkpatch.pl fixes
* mark calibrate_xor_blocks __init, Adrian Bunk

Cc: Adrian Bunk <bunk@stusta.de>
Cc: NeilBrown <neilb@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2007-07-13 08:06:14 -07:00
Sebastian Siewior e559e91cce [CRYPTO] api: Allow ablkcipher with no queues
Evgeniy's hifn driver and probably mine don't use ablkcipher->queue at all. 
The show method of ablkcipher will access this field without checking if it 
is valid.

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:55 +08:00
Sebastian Siewior ca7c39385c [CRYPTO] api: Handle unaligned keys in setkey
setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the
requested alignment by the algorithm. This patch fixes it. The extra
memory is allocated by kmalloc() with GFP_ATOMIC flag.
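
The approach, roughly (a sketch, not the exact patch; the call into the
algorithm's own setkey handler is elided):

static int setkey_unaligned_sketch(struct crypto_tfm *tfm, const u8 *key,
				   unsigned int keylen)
{
	unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
	u8 *buffer, *alignbuffer;
	int ret;

	buffer = kmalloc(keylen + alignmask, GFP_ATOMIC);
	if (!buffer)
		return -ENOMEM;

	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
	memcpy(alignbuffer, key, keylen);

	/* ... call the algorithm's real setkey(tfm, alignbuffer, keylen) ... */
	ret = 0;

	memset(alignbuffer, 0, keylen);	/* don't leave key material behind */
	kfree(buffer);
	return ret;
}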

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:54 +08:00
Herbert Xu fe3c5206ad [CRYPTO] api: Wake up all waiters when larval completes
Right now when a larval matures or when it dies of an error we
only wake up one waiter.  This would cause other waiters to timeout
unnecessarily.  This patch changes it to use complete_all to wake
up all waiters.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:53 +08:00
Jan Engelhardt 2e290f43dd [CRYPTO] Kconfig: Use menuconfig objects
Use menuconfigs instead of menus, so the whole menu can be disabled at once
instead of going through all options.

Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-07-11 20:58:53 +08:00
Rafael J. Wysocki 189fe3174c [CRYPTO] cryptd: Fix problem with cryptd and the freezer
Make sure that cryptd is marked as nonfreezable and does not hold up the
freezer.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-31 18:10:22 +10:00
Herbert Xu da7cd59ab9 [CRYPTO] api: Read module pointer before freeing algorithm
The function crypto_mod_put first frees the algorithm and then drops
the reference to its module.  Unfortunately we read the module pointer
after freeing the algorithm, and that pointer sits inside the
object that we just freed.

So this patch reads the module pointer out before we free the object.
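
In other words, the fix is roughly of this shape (sketch):

void crypto_mod_put(struct crypto_alg *alg)
{
	struct module *module = alg->cra_module;	/* read before the free */

	crypto_alg_put(alg);	/* may free 'alg' */
	module_put(module);	/* safe: uses the saved pointer */
}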

Thanks to Luca Tettamanti for reporting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-19 14:51:00 +10:00
Herbert Xu 29059d12e0 [CRYPTO] tcrypt: Add missing error check
The return value of crypto_hash_final isn't checked in test_hash_cycles.
This patch corrects this.  Thanks to Eric Sesterhenn for reporting this.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-18 16:25:19 +10:00
David Sterba 3dde6ad8fc Fix trivial typos in Kconfig* files
Fix several typos in help text in Kconfig* files.

Signed-off-by: David Sterba <dave@jikos.cz>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2007-05-09 07:12:20 +02:00
Herbert Xu 1605b8471d [CRYPTO] cryptomgr: Fix use after free
By the time kthread_run returns the param may have already been freed
so writing the returned thread_struct pointer to param is wrong.

In fact, we don't need it in param anyway so this patch simply puts it
on the stack.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-09 13:04:39 +10:00
Herbert Xu 124b53d020 [CRYPTO] cryptd: Add software async crypto daemon
This patch adds the cryptd module which is a template that takes a
synchronous software crypto algorithm and converts it to an asynchronous
one by executing it in a kernel thread.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu a73e69965f [CRYPTO] api: Do not remove users unless new algorithm matches
As it is whenever a new algorithm with the same name is registered
users of the old algorithm will be removed so that they can take
advantage of the new algorithm.  This presents a problem when the
new algorithm is not equivalent to the old algorithm.  In particular,
the new algorithm might only function on top of the existing one.

Hence we should not remove users unless they can make use of the
new algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:32 +10:00
Herbert Xu cf02f5da94 [CRYPTO] cryptomgr: Fix parsing of nested templates
This patch allows the use of nested templates by allowing the use of
brackets inside a template parameter.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu b5b7f08869 [CRYPTO] api: Add async blkcipher type
This patch adds the mid-level interface for asynchronous block ciphers.
It also includes a generic queueing mechanism that can be used by other
asynchronous crypto operations in future.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu ebc610e5bc [CRYPTO] templates: Pass type/mask when creating instances
This patch passes the type/mask along when constructing instances of
templates.  This is in preparation for templates that may support
multiple types of instances depending on what is requested.  For example,
the planned software async crypto driver will use this construct.

For the moment this allows us to check whether the instance constructed
is of the correct type and avoid returning success if the type does not
match.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:31 +10:00
Herbert Xu 6158efc090 [CRYPTO] tcrypt: Use async blkcipher interface
This patch converts the tcrypt module to use the asynchronous block cipher
interface.  As all synchronous block ciphers can be used through the async
interface, tcrypt is still able to test them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:30 +10:00
Herbert Xu 32e3983fe5 [CRYPTO] api: Add async block cipher interface
This patch adds the frontend interface for asynchronous block ciphers.
In addition to the usual block cipher parameters, there is a callback
function pointer and a data pointer.  The callback will be invoked only
if the encrypt/decrypt handlers return -EINPROGRESS.  In other words,
if the return value is zero, the completion handler (or the equivalent
code) needs to be invoked by the caller.

The request structure is allocated and freed by the caller.  Its size
is determined by calling crypto_ablkcipher_reqsize().  The helpers
ablkcipher_request_alloc/ablkcipher_request_free can be used to manage
the memory for a request.
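
A hedged usage sketch of the request flow (completion handling trimmed;
'tfm', 'src_sg', 'dst_sg', 'nbytes', 'iv', 'my_complete' and 'my_ctx' are
assumed names):

struct ablkcipher_request *req;
int err;

req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
ablkcipher_request_set_callback(req, 0, my_complete, my_ctx);
ablkcipher_request_set_crypt(req, src_sg, dst_sg, nbytes, iv);

err = crypto_ablkcipher_encrypt(req);
if (err != -EINPROGRESS)
	my_complete(&req->base, err);	/* finished synchronously or failed */
/* otherwise my_complete() is invoked later by the driver */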

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:30 +10:00
Herbert Xu 03f5d8cedb [CRYPTO] api: Proc functions should be marked as unused
The proc functions were incorrectly marked as used rather than unused.
They may be unused if proc is disabled.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-05-02 14:38:29 +10:00
Jouni Malinen 85d32e7b0e [PATCH] Update my email address from jkmaline@cc.hut.fi to j@w1.fi
After 13 years of use, it looks like my email address is finally going
to disappear. While this is likely to drop the amount of incoming spam
greatly ;-), it may also affect more appropriate messages, so let's
update my email address in various places. In addition, Host AP mailing
list is subscribers-only and linux-wireless can also be used for
discussing issues related to this driver which is now shown in
MAINTAINERS.

Signed-off-by: Jouni Malinen <j@w1.fi>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
2007-04-28 11:01:01 -04:00
Herbert Xu 9f11672728 [CRYPTO] api: Flush the current page rather than the next
On platforms where flush_dcache_page is needed we're currently flushing
the next page rather than the one we've just processed.  This patch fixes
the off-by-one error.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-31 12:58:20 +10:00
Herbert Xu 4ee531a3e6 [CRYPTO] api: Use the right value when advancing scatterwalk_copychunks
In the scatterwalk_copychunks loop, We should be advancing by
len_this_page and not nbytes.  The latter is the total length.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-31 12:16:20 +10:00
Sebastian Siewior 7bc301e97b [CRYPTO] tcrypt: Fix error checking for comp allocation
This patch fixes loading the tcrypt module while deflate isn't available
at all (isn't built).

Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-21 08:58:43 +11:00
J. Bruce Fields f70ee5ec8f [CRYPTO] api: scatterwalk_copychunks() fails to advance through scatterlist
In the loop in scatterwalk_copychunks(), if walk->offset is zero,
then scatterwalk_pagedone rounds that up to the nearest page boundary:

		walk->offset += PAGE_SIZE - 1;
		walk->offset &= PAGE_MASK;

which is a no-op in this case, so we don't advance to the next element
of the scatterlist array:

		if (walk->offset >= walk->sg->offset + walk->sg->length)
			scatterwalk_start(walk, sg_next(walk->sg));

and we end up copying the same data twice.

It appears that other callers of scatterwalk_{page}done first advance
walk->offset, so I believe that's the correct thing to do here.

This caused a bug in NFS when run with krb5p security, which would
cause some writes to fail with permissions errors--for example, writes
of less than 8 bytes (the des blocksize) at the start of a file.

A git-bisect shows the bug was originally introduced by
5c64097aa0, first in 2.6.19-rc1.

Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2007-03-21 08:50:12 +11:00
Arjan van de Ven 2b8693c061 [PATCH] mark struct file_operations const 3
Many struct file_operations in the kernel can be "const".  Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data.  In addition it'll catch accidental writes at compile time to
these shared resources.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-02-12 09:48:45 -08:00